Dec 16 11:55:25 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 16 11:55:25 crc restorecon[4566]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 16 11:55:25 crc restorecon[4566]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 
11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 16 11:55:25 crc 
restorecon[4566]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 
11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:25 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 
11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc 
restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 16 11:55:26 crc restorecon[4566]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 16 11:55:26 crc restorecon[4566]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 16 11:55:26 crc kubenswrapper[4805]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 11:55:26 crc kubenswrapper[4805]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 16 11:55:26 crc kubenswrapper[4805]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 11:55:26 crc kubenswrapper[4805]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
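The deprecation warnings above (and --system-reserved just below) all point the same way: these kubelet flags should move into the file passed via --config. A minimal sketch of such a KubeletConfiguration follows, with placeholder values rather than this node's real settings; the field names are from the kubelet.config.k8s.io/v1beta1 schema documented at the URL in the warnings, and the kubelet accepts its config file as JSON as well as YAML, so stdlib Python is enough to generate it. --minimum-container-ttl-duration has no direct field; per its warning it becomes eviction settings instead.

```python
#!/usr/bin/env python3
"""Sketch: a kubelet --config file covering the deprecated flags above.

All values are placeholders, not read from this node. The kubelet accepts
its config file as JSON or YAML, so json.dumps is sufficient here.
"""
import json

config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # replaces --container-runtime-endpoint (placeholder CRI-O socket)
    "containerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",
    # replaces --volume-plugin-dir (placeholder path)
    "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
    # replaces --register-with-taints (placeholder taint)
    "registerWithTaints": [
        {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
    ],
    # replaces --system-reserved, flagged just below (placeholder values)
    "systemReserved": {"cpu": "500m", "memory": "1Gi"},
    # replaces --minimum-container-ttl-duration, per its warning:
    # use eviction thresholds instead (placeholder threshold)
    "evictionHard": {"memory.available": "100Mi"},
}

print(json.dumps(config, indent=2))
```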
Dec 16 11:55:26 crc kubenswrapper[4805]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 16 11:55:26 crc kubenswrapper[4805]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.382900 4805 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386625 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386657 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386670 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386676 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386681 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386686 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386690 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386697 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386702 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386707 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386711 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386716 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386720 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386725 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386735 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386739 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386744 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386748 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386753 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386757 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 
11:55:26.386762 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386767 4805 feature_gate.go:330] unrecognized feature gate: Example Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386771 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386778 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386782 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386787 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386792 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386801 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386808 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386815 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386820 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386825 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386830 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386836 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386840 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386847 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386852 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386857 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386861 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386870 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386875 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386883 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386892 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386897 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386903 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386908 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386913 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386918 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386923 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386928 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386934 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386942 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386946 4805 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386949 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386953 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386957 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386961 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386964 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386968 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386973 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386978 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386981 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386987 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.386996 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.387004 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.387009 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.387013 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.387017 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.387021 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.387025 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.387031 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387461 4805 flags.go:64] FLAG: --address="0.0.0.0" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387476 4805 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387497 4805 flags.go:64] FLAG: --anonymous-auth="true" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387507 4805 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387517 4805 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387522 4805 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387555 4805 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387838 4805 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387848 4805 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387853 4805 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387858 4805 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387864 4805 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387869 4805 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387874 4805 flags.go:64] FLAG: --cgroup-root="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387878 4805 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387883 4805 flags.go:64] FLAG: --client-ca-file="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387887 4805 flags.go:64] FLAG: --cloud-config="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387892 4805 flags.go:64] FLAG: --cloud-provider="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387896 4805 flags.go:64] FLAG: --cluster-dns="[]" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387905 4805 flags.go:64] FLAG: --cluster-domain="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387909 4805 
flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387914 4805 flags.go:64] FLAG: --config-dir="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387918 4805 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387924 4805 flags.go:64] FLAG: --container-log-max-files="5" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387930 4805 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387936 4805 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387941 4805 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387946 4805 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387951 4805 flags.go:64] FLAG: --contention-profiling="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387955 4805 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387959 4805 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387964 4805 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387968 4805 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387974 4805 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387979 4805 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387984 4805 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387988 4805 flags.go:64] FLAG: --enable-load-reader="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387992 4805 flags.go:64] FLAG: --enable-server="true" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.387997 4805 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388005 4805 flags.go:64] FLAG: --event-burst="100" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388013 4805 flags.go:64] FLAG: --event-qps="50" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388018 4805 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388022 4805 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388026 4805 flags.go:64] FLAG: --eviction-hard="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388048 4805 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388052 4805 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388057 4805 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388061 4805 flags.go:64] FLAG: --eviction-soft="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388065 4805 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388070 4805 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 16 11:55:26 crc 
kubenswrapper[4805]: I1216 11:55:26.388074 4805 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388078 4805 flags.go:64] FLAG: --experimental-mounter-path="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388082 4805 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388086 4805 flags.go:64] FLAG: --fail-swap-on="true" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388090 4805 flags.go:64] FLAG: --feature-gates="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388096 4805 flags.go:64] FLAG: --file-check-frequency="20s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388101 4805 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388107 4805 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388111 4805 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388116 4805 flags.go:64] FLAG: --healthz-port="10248" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388121 4805 flags.go:64] FLAG: --help="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388125 4805 flags.go:64] FLAG: --hostname-override="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388129 4805 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388134 4805 flags.go:64] FLAG: --http-check-frequency="20s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388151 4805 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388158 4805 flags.go:64] FLAG: --image-credential-provider-config="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388169 4805 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388175 4805 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388180 4805 flags.go:64] FLAG: --image-service-endpoint="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388186 4805 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388192 4805 flags.go:64] FLAG: --kube-api-burst="100" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388197 4805 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388205 4805 flags.go:64] FLAG: --kube-api-qps="50" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388210 4805 flags.go:64] FLAG: --kube-reserved="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388215 4805 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388221 4805 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388225 4805 flags.go:64] FLAG: --kubelet-cgroups="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388229 4805 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388234 4805 flags.go:64] FLAG: --lock-file="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388238 4805 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 
11:55:26.388242 4805 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388246 4805 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388254 4805 flags.go:64] FLAG: --log-json-split-stream="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388259 4805 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388263 4805 flags.go:64] FLAG: --log-text-split-stream="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388268 4805 flags.go:64] FLAG: --logging-format="text" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388273 4805 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388277 4805 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388281 4805 flags.go:64] FLAG: --manifest-url="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388286 4805 flags.go:64] FLAG: --manifest-url-header="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388295 4805 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388300 4805 flags.go:64] FLAG: --max-open-files="1000000" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388306 4805 flags.go:64] FLAG: --max-pods="110" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388311 4805 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388315 4805 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388319 4805 flags.go:64] FLAG: --memory-manager-policy="None" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388324 4805 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388328 4805 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388332 4805 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388337 4805 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388354 4805 flags.go:64] FLAG: --node-status-max-images="50" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388359 4805 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388363 4805 flags.go:64] FLAG: --oom-score-adj="-999" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388367 4805 flags.go:64] FLAG: --pod-cidr="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388372 4805 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388380 4805 flags.go:64] FLAG: --pod-manifest-path="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388385 4805 flags.go:64] FLAG: --pod-max-pids="-1" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388391 4805 flags.go:64] FLAG: --pods-per-core="0" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388397 4805 flags.go:64] FLAG: --port="10250" Dec 16 11:55:26 crc 
kubenswrapper[4805]: I1216 11:55:26.388409 4805 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388415 4805 flags.go:64] FLAG: --provider-id="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388420 4805 flags.go:64] FLAG: --qos-reserved="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388426 4805 flags.go:64] FLAG: --read-only-port="10255" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388432 4805 flags.go:64] FLAG: --register-node="true" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388438 4805 flags.go:64] FLAG: --register-schedulable="true" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388444 4805 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388456 4805 flags.go:64] FLAG: --registry-burst="10" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388461 4805 flags.go:64] FLAG: --registry-qps="5" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388467 4805 flags.go:64] FLAG: --reserved-cpus="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388483 4805 flags.go:64] FLAG: --reserved-memory="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388491 4805 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388495 4805 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388500 4805 flags.go:64] FLAG: --rotate-certificates="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388504 4805 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388510 4805 flags.go:64] FLAG: --runonce="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388514 4805 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388518 4805 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388523 4805 flags.go:64] FLAG: --seccomp-default="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388527 4805 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388531 4805 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388536 4805 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388540 4805 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388545 4805 flags.go:64] FLAG: --storage-driver-password="root" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388549 4805 flags.go:64] FLAG: --storage-driver-secure="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388553 4805 flags.go:64] FLAG: --storage-driver-table="stats" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388557 4805 flags.go:64] FLAG: --storage-driver-user="root" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388561 4805 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388565 4805 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388570 4805 flags.go:64] FLAG: --system-cgroups="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 
11:55:26.388574 4805 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388582 4805 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388586 4805 flags.go:64] FLAG: --tls-cert-file="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388590 4805 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388598 4805 flags.go:64] FLAG: --tls-min-version="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388601 4805 flags.go:64] FLAG: --tls-private-key-file="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388605 4805 flags.go:64] FLAG: --topology-manager-policy="none" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388609 4805 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388618 4805 flags.go:64] FLAG: --topology-manager-scope="container" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388623 4805 flags.go:64] FLAG: --v="2" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388631 4805 flags.go:64] FLAG: --version="false" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388642 4805 flags.go:64] FLAG: --vmodule="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388652 4805 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.388658 4805 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388786 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388792 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388797 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388801 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388805 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388809 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388812 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388816 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388820 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388824 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388828 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388832 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388836 4805 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388840 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388844 4805 
feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388847 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388879 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388884 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388887 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388891 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388895 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388898 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388902 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388905 4805 feature_gate.go:330] unrecognized feature gate: Example Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388909 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388913 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388918 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388922 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388926 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388930 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388933 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388937 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388941 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388946 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388950 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388954 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388958 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388962 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388966 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388970 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388974 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388977 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388981 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388984 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388988 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388992 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.388996 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389000 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389006 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389009 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389014 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389018 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389022 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389026 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389029 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389033 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389036 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389040 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389044 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389048 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389053 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389057 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389061 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389065 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389069 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389073 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389076 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389080 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389084 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389088 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.389093 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.389388 4805 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.397434 4805 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.397484 4805 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397555 4805 feature_gate.go:330] unrecognized feature gate: Example Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397565 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397570 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397575 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397581 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397588 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397593 4805 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397597 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397603 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397609 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397614 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397620 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397625 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397630 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397634 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397639 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397643 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397647 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397651 4805 feature_gate.go:330] unrecognized feature gate: 
VSphereControlPlaneMachineSet Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397654 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397658 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397662 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397666 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397670 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397673 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397677 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397682 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397685 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397689 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397698 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397702 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397706 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397709 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397713 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397717 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397720 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397724 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397728 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397732 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397735 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397739 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397744 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397749 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397756 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397760 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397764 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397768 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397773 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397777 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397781 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397785 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397789 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397793 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397797 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397801 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397805 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397810 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397815 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397819 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397823 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397826 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397830 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397835 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397840 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397844 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397849 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397853 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397857 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397861 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397865 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.397869 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.397876 4805 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398012 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398019 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398025 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398030 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398035 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398040 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398044 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398048 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398052 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398055 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398059 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398063 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398066 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398070 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398074 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398078 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398081 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398085 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398089 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398092 4805 feature_gate.go:330] unrecognized feature gate: Example Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398096 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398099 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398103 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398107 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398110 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398114 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398118 4805 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398122 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398129 4805 feature_gate.go:330] unrecognized 
feature gate: InsightsConfigAPI Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398165 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398172 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398177 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398182 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398188 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398192 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398197 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398201 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398206 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398209 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398213 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398218 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398221 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398253 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398257 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398261 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398265 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398269 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398273 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398277 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398280 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398284 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398288 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398291 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398295 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 16 
11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398299 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398303 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398307 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398310 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398314 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398317 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398340 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398344 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398348 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398352 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398357 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398363 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398368 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398373 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398377 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398382 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.398386 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.398393 4805 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.398645 4805 server.go:940] "Client rotation is on, will bootstrap in background" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.402120 4805 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.402240 4805 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.402749 4805 server.go:997] "Starting client certificate rotation" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.402779 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.402985 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-03 01:46:35.133294628 +0000 UTC Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.403082 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 421h51m8.730214526s for next certificate rotation Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.407953 4805 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.410175 4805 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.417122 4805 log.go:25] "Validated CRI v1 runtime API" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.430009 4805 log.go:25] "Validated CRI v1 image API" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.431962 4805 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.436551 4805 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-16-11-50-02-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.436599 4805 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.447190 4805 manager.go:217] Machine: {Timestamp:2025-12-16 11:55:26.446326763 +0000 UTC m=+0.164584588 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199472640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:7d5db2bf-38e9-4d90-94de-b757ce8f553c BootID:37a7c343-177b-430b-a5cf-e05c308f6740 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076107 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599734272 Type:vfs Inodes:3076107 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039894528 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 
Capacity:12599738368 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:35:e9:3d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:35:e9:3d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:73:2e:af Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:74:2e:70 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f9:e2:1a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d5:5f:52 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:76:9f:d4:83:e2:1c Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:9a:e5:0e:12:93:d4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199472640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.447449 4805 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.447733 4805 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.448458 4805 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.448640 4805 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.448678 4805 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.448886 4805 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.448898 4805 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.449331 4805 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.449372 4805 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.449682 4805 state_mem.go:36] "Initialized new in-memory state store" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.449764 4805 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.450351 4805 kubelet.go:418] "Attempting to sync node with API server" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.450368 4805 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.450392 4805 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.450404 4805 kubelet.go:324] "Adding apiserver pod source" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.450415 4805 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.452544 4805 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.453174 4805 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.453801 4805 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.454479 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.454521 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.454536 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.454545 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.454569 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.454579 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.454587 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.454600 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.454626 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.454642 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.454655 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.454663 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.454566 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Dec 16 11:55:26 crc kubenswrapper[4805]: E1216 11:55:26.454775 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.454620 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Dec 16 11:55:26 crc kubenswrapper[4805]: E1216 11:55:26.454834 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.454885 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.455388 4805 server.go:1280] "Started kubelet" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.455714 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.455874 4805 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.456676 4805 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.455892 4805 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 11:55:26 crc systemd[1]: Started Kubernetes Kubelet. Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.457624 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.457706 4805 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.457964 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:19:28.121054769 +0000 UTC Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.458527 4805 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.458546 4805 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.459370 4805 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 11:55:26 crc kubenswrapper[4805]: E1216 11:55:26.461510 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.27:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1881b0137129689c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-16 11:55:26.455351452 +0000 UTC m=+0.173609257,LastTimestamp:2025-12-16 11:55:26.455351452 +0000 UTC m=+0.173609257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.463052 4805 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Dec 16 11:55:26 crc kubenswrapper[4805]: E1216 11:55:26.463217 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Dec 16 11:55:26 crc kubenswrapper[4805]: E1216 11:55:26.463302 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.463506 4805 factory.go:55] Registering systemd factory Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.463537 4805 factory.go:221] Registration of the systemd container factory successfully Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.464253 4805 factory.go:153] Registering CRI-O factory Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.464269 4805 factory.go:221] Registration of the crio container factory successfully Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.464352 4805 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.464385 4805 factory.go:103] Registering Raw factory Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.464403 4805 manager.go:1196] Started watching for new ooms in manager Dec 16 11:55:26 crc kubenswrapper[4805]: E1216 11:55:26.468342 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="200ms" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.468457 4805 manager.go:319] Starting recovery of all containers Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.470179 4805 server.go:460] "Adding debug handlers to kubelet server" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.474049 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.474112 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.474132 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.474181 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.474869 4805 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.474909 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.474931 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.474957 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.474981 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.474998 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475015 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475031 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475047 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475062 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475081 4805 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475096 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475111 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475124 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475159 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475193 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475208 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475224 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475240 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475257 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475275 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475292 4805 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475307 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475326 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475344 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475359 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475375 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475390 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475403 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475418 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475433 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475447 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475463 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475479 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475493 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475511 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475534 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475549 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475564 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475580 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475595 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475610 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475626 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475640 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475655 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475671 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475722 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475740 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475757 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475778 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475797 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475834 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475853 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475868 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475885 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475901 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475917 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475936 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475951 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475967 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.475982 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476000 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476017 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476033 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476049 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476088 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476119 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476140 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476179 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476197 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476213 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476230 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476246 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476263 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476278 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476294 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476311 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476326 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476342 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476357 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476372 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476388 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476421 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476437 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476455 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476472 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476488 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476505 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476520 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476537 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476554 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476570 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476587 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476602 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476620 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476636 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476654 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476689 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476703 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476720 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476739 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476772 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476790 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476808 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476825 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476843 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476859 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476879 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476934 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476954 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476971 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.476987 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477006 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477026 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477081 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477099 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477118 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477136 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477189 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477221 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477237 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477253 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477269 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477289 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477305 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477320 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477340 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477355 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477372 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477387 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477403 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477420 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477438 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477455 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477472 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477487 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477502 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477518 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477541 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477557 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477572 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477588 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477604 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477622 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477641 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477658 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477676 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477692 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477712 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477729 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477745 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477761 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477777 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477793 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477809 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477824 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477841 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477880 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477898 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477915 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477932 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477948 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477963 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477979 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.477998 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.478014 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.478029 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.478045 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.478061 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.478079 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.478094 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.478109 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.478126 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.478162 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.478179 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.478195 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.481518 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483304 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483353 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483369 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483385 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483396 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483409 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483430 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483448 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483472 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483515 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483539 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483552 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483569 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483583 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483594 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483605 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483615 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483625 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483635 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483644 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483654 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483664 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483673 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483710 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483720 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483730 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483741 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483749 4805 reconstruct.go:97] "Volume reconstruction finished" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.483757 4805 reconciler.go:26] "Reconciler: start to sync state" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.497700 4805 manager.go:324] Recovery completed Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.510604 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.513917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.514178 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.514303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.516416 4805 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.516461 4805 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.516489 4805 state_mem.go:36] "Initialized new in-memory state store" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.518643 4805 
kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.521321 4805 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.521407 4805 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.521431 4805 kubelet.go:2335] "Starting kubelet main sync loop" Dec 16 11:55:26 crc kubenswrapper[4805]: E1216 11:55:26.521489 4805 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.522427 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Dec 16 11:55:26 crc kubenswrapper[4805]: E1216 11:55:26.522511 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.524681 4805 policy_none.go:49] "None policy: Start" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.526656 4805 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.526688 4805 state_mem.go:35] "Initializing new in-memory state store" Dec 16 11:55:26 crc kubenswrapper[4805]: E1216 11:55:26.564125 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.578286 4805 manager.go:334] "Starting Device Plugin manager" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.578341 4805 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.578354 4805 server.go:79] "Starting device plugin registration server" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.578721 4805 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.578792 4805 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.578967 4805 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.579095 4805 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.579106 4805 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 11:55:26 crc kubenswrapper[4805]: E1216 11:55:26.585639 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.622354 4805 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.622508 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.623409 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.623463 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.623475 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.623646 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.623915 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.623986 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.624602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.624624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.624634 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.624738 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.624882 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.624920 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.625381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.625403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.625412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.625474 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.625760 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.625827 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.625838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.625868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.625883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.626046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.626061 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.626068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.626166 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.626258 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.626283 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.626630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.626652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.626662 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.627180 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.627201 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.627255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.627266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.627209 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.627330 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.627210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 
11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.627412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.627420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.627702 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.627742 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.628386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.628404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.628412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:26 crc kubenswrapper[4805]: E1216 11:55:26.669484 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="400ms" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.679722 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.680698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.680764 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.680779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.680809 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 16 11:55:26 crc kubenswrapper[4805]: E1216 11:55:26.681348 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.685550 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.685598 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.685628 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.685712 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.685764 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.685800 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.685851 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.685888 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.685917 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.685969 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.685994 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.686020 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.686045 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.686071 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.686098 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787408 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787516 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787583 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787607 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787648 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787615 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 
11:55:26.787727 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787784 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787809 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787813 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787840 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787852 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787911 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787944 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787978 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.787995 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.788029 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.788043 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.788059 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.788040 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.788106 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.788078 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.788130 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.788169 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.788196 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.788226 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.788248 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.788265 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.788096 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.881438 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.887239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.887288 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.887298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.887322 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 16 11:55:26 crc kubenswrapper[4805]: E1216 11:55:26.887812 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.964446 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: W1216 11:55:26.986896 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ca66eff90542241504f9402776d415eff2a069d1fa81e9b1a31d857dea0986e2 WatchSource:0}: Error finding container ca66eff90542241504f9402776d415eff2a069d1fa81e9b1a31d857dea0986e2: Status 404 returned error can't find the container with id ca66eff90542241504f9402776d415eff2a069d1fa81e9b1a31d857dea0986e2 Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.991914 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:26 crc kubenswrapper[4805]: I1216 11:55:26.999425 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 16 11:55:27 crc kubenswrapper[4805]: W1216 11:55:27.012432 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b8c11ae7be30be6fb299feabbc4b197a6b63dbd9f43319f3716c2ef293500569 WatchSource:0}: Error finding container b8c11ae7be30be6fb299feabbc4b197a6b63dbd9f43319f3716c2ef293500569: Status 404 returned error can't find the container with id b8c11ae7be30be6fb299feabbc4b197a6b63dbd9f43319f3716c2ef293500569 Dec 16 11:55:27 crc kubenswrapper[4805]: W1216 11:55:27.014305 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-206403a707825e00bb6223734e408dfd292b169fd8dacb7815af99cfe56082ba WatchSource:0}: Error finding container 206403a707825e00bb6223734e408dfd292b169fd8dacb7815af99cfe56082ba: Status 404 returned error can't find the container with id 206403a707825e00bb6223734e408dfd292b169fd8dacb7815af99cfe56082ba Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.024888 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.028980 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 16 11:55:27 crc kubenswrapper[4805]: W1216 11:55:27.036595 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-0d6ff5c15edf7e4c4ae2e6db530eae5c22f3ddf28d3936cfe8182aacb81ec7a4 WatchSource:0}: Error finding container 0d6ff5c15edf7e4c4ae2e6db530eae5c22f3ddf28d3936cfe8182aacb81ec7a4: Status 404 returned error can't find the container with id 0d6ff5c15edf7e4c4ae2e6db530eae5c22f3ddf28d3936cfe8182aacb81ec7a4 Dec 16 11:55:27 crc kubenswrapper[4805]: W1216 11:55:27.040210 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-3344710aaa42a7d19b4b3f54f595126fcb585767e921141576abb8728ccb19b6 WatchSource:0}: Error finding container 3344710aaa42a7d19b4b3f54f595126fcb585767e921141576abb8728ccb19b6: Status 404 returned error can't find the container with id 3344710aaa42a7d19b4b3f54f595126fcb585767e921141576abb8728ccb19b6 Dec 16 11:55:27 crc kubenswrapper[4805]: E1216 11:55:27.070772 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="800ms" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.288041 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.289218 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.289483 4805 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.289492 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.289519 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 16 11:55:27 crc kubenswrapper[4805]: E1216 11:55:27.289881 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc" Dec 16 11:55:27 crc kubenswrapper[4805]: W1216 11:55:27.293197 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Dec 16 11:55:27 crc kubenswrapper[4805]: E1216 11:55:27.293268 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Dec 16 11:55:27 crc kubenswrapper[4805]: W1216 11:55:27.317173 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Dec 16 11:55:27 crc kubenswrapper[4805]: E1216 11:55:27.317247 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.457102 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.458275 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:26:59.802207403 +0000 UTC Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.526931 4805 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1" exitCode=0 Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.526994 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1"} Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.527057 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3344710aaa42a7d19b4b3f54f595126fcb585767e921141576abb8728ccb19b6"} Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.527174 4805 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.528198 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.528226 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.528238 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.528560 4805 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155" exitCode=0 Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.528610 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155"} Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.528627 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0d6ff5c15edf7e4c4ae2e6db530eae5c22f3ddf28d3936cfe8182aacb81ec7a4"} Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.528817 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.530854 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.530881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.530896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.533134 4805 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9d5ae9c99b0d1c6d9488eae6d4be7254fab20de4ec58012551e11e486453d2d5" exitCode=0 Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.533163 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9d5ae9c99b0d1c6d9488eae6d4be7254fab20de4ec58012551e11e486453d2d5"} Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.533199 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"206403a707825e00bb6223734e408dfd292b169fd8dacb7815af99cfe56082ba"} Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.533267 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.534127 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.534163 4805 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.534175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:27 crc kubenswrapper[4805]: W1216 11:55:27.535298 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Dec 16 11:55:27 crc kubenswrapper[4805]: E1216 11:55:27.535404 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.538526 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90" exitCode=0 Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.538595 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90"} Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.538626 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b8c11ae7be30be6fb299feabbc4b197a6b63dbd9f43319f3716c2ef293500569"} Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.538724 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.539610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.539641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.539652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.541272 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030"} Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.541310 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ca66eff90542241504f9402776d415eff2a069d1fa81e9b1a31d857dea0986e2"} Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.541710 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.542315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
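
The failures above all share one root cause: nothing is accepting connections on api-int.crc.testing:6443 yet, because this single-node kubelet is itself still starting the static kube-apiserver-crc pod that serves that endpoint. Node registration (kubelet_node_status.go), lease renewal (controller.go), and the client-go reflectors all report the same "dial tcp 38.102.83.27:6443: connect: connection refused" until the apiserver containers come up; note the lease retry interval doubling from 800ms to 1.6s (and later 3.2s) as the controller backs off. A minimal sketch, illustrative only and not kubelet code — the host and port are taken from the log records above, the timeout is an arbitrary choice — that reproduces the check these components are failing:

    // Sketch: dial the apiserver endpoint the way a failing client would.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // api-int.crc.testing:6443 is the endpoint shown in the log above.
        conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
        if err != nil {
            // While the static kube-apiserver pod is still starting, this
            // prints the same "connect: connection refused" seen in the log.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
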
Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.542342 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:27 crc kubenswrapper[4805]: I1216 11:55:27.542353 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:27 crc kubenswrapper[4805]: W1216 11:55:27.838445 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Dec 16 11:55:27 crc kubenswrapper[4805]: E1216 11:55:27.838672 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Dec 16 11:55:27 crc kubenswrapper[4805]: E1216 11:55:27.871639 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="1.6s" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.090174 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.092864 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.092893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.092903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.092938 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 16 11:55:28 crc kubenswrapper[4805]: E1216 11:55:28.093736 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.460193 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:33:25.813531295 +0000 UTC Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.460285 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 346h37m57.353249073s for next certificate rotation Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.544895 4805 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a" exitCode=0 Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.544978 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a"} Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.545167 4805 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.545886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.545925 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.545935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.547339 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"87e366188c71f88353931b92e19bc6a1c2b58eb3a55bf872be24918600a1f38e"} Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.547376 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"11ed200ae81bc9ecf0225a69d59eb4385bcba5823c87abc590d781340c60490f"} Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.547389 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1f08fe672a7cdd5082cea8ed2cacc5c71754493f2470a7a81fa6baff03eee5c8"} Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.547488 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.548285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.548310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.548319 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.550348 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bc9e9f3d773e7d01ae06a38b49b8e51be1fc9036229525ba27b0c1bd27078555"} Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.550417 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.553103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.553175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.553188 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.555337 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26"} Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.555373 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5"} Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.555388 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6"} Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.555400 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad"} Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.557718 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f"} Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.557746 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9"} Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.557763 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9"} Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.557802 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.558874 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.558898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:28 crc kubenswrapper[4805]: I1216 11:55:28.558908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.563662 4805 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542" exitCode=0 Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.563802 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542"} Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.564038 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.565558 
4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.565609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.565623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.571419 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca"} Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.571483 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.571488 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.572825 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.572870 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.572886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.574035 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.574075 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.574093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.694837 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.697202 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.697243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.697253 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:29 crc kubenswrapper[4805]: I1216 11:55:29.697274 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 16 11:55:30 crc kubenswrapper[4805]: I1216 11:55:30.576899 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e"} Dec 16 11:55:30 crc kubenswrapper[4805]: I1216 11:55:30.576950 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:30 crc kubenswrapper[4805]: I1216 11:55:30.576949 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615"} Dec 16 11:55:30 crc kubenswrapper[4805]: I1216 11:55:30.577035 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:30 crc kubenswrapper[4805]: I1216 11:55:30.577056 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:30 crc kubenswrapper[4805]: I1216 11:55:30.577055 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a"} Dec 16 11:55:30 crc kubenswrapper[4805]: I1216 11:55:30.577530 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828"} Dec 16 11:55:30 crc kubenswrapper[4805]: I1216 11:55:30.577543 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49"} Dec 16 11:55:30 crc kubenswrapper[4805]: I1216 11:55:30.577944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:30 crc kubenswrapper[4805]: I1216 11:55:30.577974 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:30 crc kubenswrapper[4805]: I1216 11:55:30.577986 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:30 crc kubenswrapper[4805]: I1216 11:55:30.578757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:30 crc kubenswrapper[4805]: I1216 11:55:30.578786 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:30 crc kubenswrapper[4805]: I1216 11:55:30.578798 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:31 crc kubenswrapper[4805]: I1216 11:55:31.579936 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:31 crc kubenswrapper[4805]: I1216 11:55:31.579994 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:31 crc kubenswrapper[4805]: I1216 11:55:31.581560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:31 crc kubenswrapper[4805]: I1216 11:55:31.581701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:31 crc kubenswrapper[4805]: I1216 11:55:31.581770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:31 crc kubenswrapper[4805]: I1216 11:55:31.581860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:31 crc kubenswrapper[4805]: I1216 11:55:31.581882 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:31 crc kubenswrapper[4805]: I1216 11:55:31.581978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:31 crc kubenswrapper[4805]: I1216 11:55:31.789855 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 16 11:55:31 crc kubenswrapper[4805]: I1216 11:55:31.865748 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:31 crc kubenswrapper[4805]: I1216 11:55:31.948787 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:32 crc kubenswrapper[4805]: I1216 11:55:32.582005 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:32 crc kubenswrapper[4805]: I1216 11:55:32.582005 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:32 crc kubenswrapper[4805]: I1216 11:55:32.582846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:32 crc kubenswrapper[4805]: I1216 11:55:32.582891 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:32 crc kubenswrapper[4805]: I1216 11:55:32.582909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:32 crc kubenswrapper[4805]: I1216 11:55:32.583321 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:32 crc kubenswrapper[4805]: I1216 11:55:32.583362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:32 crc kubenswrapper[4805]: I1216 11:55:32.583376 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:33 crc kubenswrapper[4805]: I1216 11:55:33.584043 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:33 crc kubenswrapper[4805]: I1216 11:55:33.588505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:33 crc kubenswrapper[4805]: I1216 11:55:33.588550 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:33 crc kubenswrapper[4805]: I1216 11:55:33.588562 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:35 crc kubenswrapper[4805]: I1216 11:55:35.028177 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 16 11:55:35 crc kubenswrapper[4805]: I1216 11:55:35.028362 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:35 crc kubenswrapper[4805]: I1216 11:55:35.029381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:35 crc kubenswrapper[4805]: I1216 11:55:35.029405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
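
From 11:55:31 onward the probe workers take over the story, as the records below show: the kube-apiserver startup probe first fails with "connection refused", then with HTTP 403 ("forbidden: User \"system:anonymous\" cannot get path \"/livez\""). The 403 is the interesting transition — the socket and TLS stack are already up, but the unauthenticated probe request is rejected, and the kubelet still counts it as a failure. A sketch of an equivalent hand-rolled check, under stated assumptions: the /livez path and the node IP 192.168.126.11 appear in the log, but the 6443 port is an assumption (the log does not show which port this probe targets), and skipping TLS verification is a shortcut for illustration, not what the kubelet does.

    // Sketch: an HTTPS GET resembling the startup probe in the log.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption for the sketch: the cluster CA is not trusted here,
            // so certificate verification is disabled.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.126.11:6443/livez") // port assumed
        if err != nil {
            fmt.Println("probe failed:", err) // connection refused while starting
            return
        }
        defer resp.Body.Close()
        // 403 for anonymous callers proves the listener is up even though
        // the probe is still recorded as a failure; 200 once healthy.
        fmt.Println("probe status:", resp.StatusCode)
    }
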
Dec 16 11:55:35 crc kubenswrapper[4805]: I1216 11:55:35.029416 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:36 crc kubenswrapper[4805]: I1216 11:55:36.235839 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:36 crc kubenswrapper[4805]: I1216 11:55:36.236012 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:36 crc kubenswrapper[4805]: I1216 11:55:36.237022 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:36 crc kubenswrapper[4805]: I1216 11:55:36.237070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:36 crc kubenswrapper[4805]: I1216 11:55:36.237086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:36 crc kubenswrapper[4805]: I1216 11:55:36.483256 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:36 crc kubenswrapper[4805]: I1216 11:55:36.488042 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:36 crc kubenswrapper[4805]: E1216 11:55:36.585752 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 16 11:55:36 crc kubenswrapper[4805]: I1216 11:55:36.589703 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:36 crc kubenswrapper[4805]: I1216 11:55:36.590595 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:36 crc kubenswrapper[4805]: I1216 11:55:36.590686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:36 crc kubenswrapper[4805]: I1216 11:55:36.590753 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:36 crc kubenswrapper[4805]: I1216 11:55:36.594885 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:37 crc kubenswrapper[4805]: I1216 11:55:37.095431 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:37 crc kubenswrapper[4805]: I1216 11:55:37.591996 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:37 crc kubenswrapper[4805]: I1216 11:55:37.592961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:37 crc kubenswrapper[4805]: I1216 11:55:37.593084 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:37 crc kubenswrapper[4805]: I1216 11:55:37.593173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:37 crc kubenswrapper[4805]: I1216 11:55:37.601036 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:37 crc kubenswrapper[4805]: I1216 11:55:37.715542 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 16 11:55:37 crc kubenswrapper[4805]: I1216 11:55:37.715720 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:37 crc kubenswrapper[4805]: I1216 11:55:37.716662 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:37 crc kubenswrapper[4805]: I1216 11:55:37.716696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:37 crc kubenswrapper[4805]: I1216 11:55:37.716707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:38 crc kubenswrapper[4805]: I1216 11:55:38.039335 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 16 11:55:38 crc kubenswrapper[4805]: I1216 11:55:38.039604 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 16 11:55:38 crc kubenswrapper[4805]: I1216 11:55:38.336632 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 16 11:55:38 crc kubenswrapper[4805]: I1216 11:55:38.336764 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 16 11:55:38 crc kubenswrapper[4805]: I1216 11:55:38.341408 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 16 11:55:38 crc kubenswrapper[4805]: I1216 11:55:38.341468 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 16 11:55:38 crc kubenswrapper[4805]: I1216 11:55:38.594774 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:38 crc kubenswrapper[4805]: I1216 11:55:38.595759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 16 11:55:38 crc kubenswrapper[4805]: I1216 11:55:38.595831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:38 crc kubenswrapper[4805]: I1216 11:55:38.595844 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:39 crc kubenswrapper[4805]: I1216 11:55:39.596297 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:39 crc kubenswrapper[4805]: I1216 11:55:39.597070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:39 crc kubenswrapper[4805]: I1216 11:55:39.597120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:39 crc kubenswrapper[4805]: I1216 11:55:39.597129 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:40 crc kubenswrapper[4805]: I1216 11:55:40.096201 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 16 11:55:40 crc kubenswrapper[4805]: I1216 11:55:40.096274 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 16 11:55:40 crc kubenswrapper[4805]: I1216 11:55:40.715321 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 16 11:55:40 crc kubenswrapper[4805]: I1216 11:55:40.715437 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 16 11:55:41 crc kubenswrapper[4805]: I1216 11:55:41.872468 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:41 crc kubenswrapper[4805]: I1216 11:55:41.872658 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:41 crc kubenswrapper[4805]: I1216 11:55:41.873469 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 16 11:55:41 crc kubenswrapper[4805]: I1216 11:55:41.873504 4805 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 16 11:55:41 crc kubenswrapper[4805]: I1216 11:55:41.873742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:41 crc kubenswrapper[4805]: I1216 11:55:41.873764 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:41 crc kubenswrapper[4805]: I1216 11:55:41.873774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:41 crc kubenswrapper[4805]: I1216 11:55:41.876996 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:42 crc kubenswrapper[4805]: I1216 11:55:42.602937 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:42 crc kubenswrapper[4805]: I1216 11:55:42.603415 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 16 11:55:42 crc kubenswrapper[4805]: I1216 11:55:42.603456 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 16 11:55:42 crc kubenswrapper[4805]: I1216 11:55:42.603865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:42 crc kubenswrapper[4805]: I1216 11:55:42.603899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:42 crc kubenswrapper[4805]: I1216 11:55:42.603910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.350032 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.353366 4805 trace.go:236] Trace[857749119]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Dec-2025 11:55:30.937) (total time: 12415ms): Dec 16 11:55:43 crc kubenswrapper[4805]: Trace[857749119]: ---"Objects listed" error: 12415ms (11:55:43.353) Dec 16 11:55:43 crc kubenswrapper[4805]: Trace[857749119]: [12.415520242s] [12.415520242s] END Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.353394 4805 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.353608 4805 trace.go:236] Trace[616804988]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Dec-2025 11:55:30.231) (total time: 13121ms): Dec 16 11:55:43 crc 
kubenswrapper[4805]: Trace[616804988]: ---"Objects listed" error: 13121ms (11:55:43.353) Dec 16 11:55:43 crc kubenswrapper[4805]: Trace[616804988]: [13.121712362s] [13.121712362s] END Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.353631 4805 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.354208 4805 trace.go:236] Trace[268568844]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Dec-2025 11:55:30.414) (total time: 12939ms): Dec 16 11:55:43 crc kubenswrapper[4805]: Trace[268568844]: ---"Objects listed" error: 12939ms (11:55:43.354) Dec 16 11:55:43 crc kubenswrapper[4805]: Trace[268568844]: [12.939294761s] [12.939294761s] END Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.354235 4805 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.356479 4805 trace.go:236] Trace[1496925440]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Dec-2025 11:55:30.337) (total time: 13018ms): Dec 16 11:55:43 crc kubenswrapper[4805]: Trace[1496925440]: ---"Objects listed" error: 13018ms (11:55:43.356) Dec 16 11:55:43 crc kubenswrapper[4805]: Trace[1496925440]: [13.018577321s] [13.018577321s] END Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.356500 4805 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.356653 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.358606 4805 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.462852 4805 apiserver.go:52] "Watching apiserver" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.465384 4805 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.465608 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.466004 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.466055 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.466101 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.466326 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.466355 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.466589 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.466639 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.466704 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.466736 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.469781 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.469980 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.469997 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.470246 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.470550 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.472158 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.474332 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.478131 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.484861 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.501284 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.518566 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.529220 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.541918 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.555683 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.560331 4805 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.573569 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.581426 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.594460 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.610389 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660131 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660210 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660238 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660263 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660289 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660311 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660335 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660356 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660386 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660407 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660431 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660434 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660455 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660519 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660538 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660555 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660570 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660584 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660600 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660616 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660625 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660638 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660691 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660725 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660710 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660746 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660769 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660794 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660814 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660834 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660858 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660883 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660904 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660925 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660945 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660966 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660986 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661007 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661028 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661051 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661075 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661095 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661117 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661156 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661180 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661203 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661225 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661245 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661264 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661285 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661305 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661326 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661345 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660766 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661364 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662007 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662035 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662063 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662086 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662130 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662172 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662197 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662221 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662247 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662269 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662304 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662362 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662387 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662415 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662441 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662494 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662515 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662538 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662562 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662583 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662606 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662659 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662683 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662706 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662728 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662751 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662773 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662797 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662820 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" 
(UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662847 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662871 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662896 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662919 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662942 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662964 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662986 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663006 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663036 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663062 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663084 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663121 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663179 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663206 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663231 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663253 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663275 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663295 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663315 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663338 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663359 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663394 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663415 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663442 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663465 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663486 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663507 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663528 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663548 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663569 
4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663589 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663608 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663633 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663653 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663678 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663698 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663719 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663739 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663758 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 16 11:55:43 
crc kubenswrapper[4805]: I1216 11:55:43.663779 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663800 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663858 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663879 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663918 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663940 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663964 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663985 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664005 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664028 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664048 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664067 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664088 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664107 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664128 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664165 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664186 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664206 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664226 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664247 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664267 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664316 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664341 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664362 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664383 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664404 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664424 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664452 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664473 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664573 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664598 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664619 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664641 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664663 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664684 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664705 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664725 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664747 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664768 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664797 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664818 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664849 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664871 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664893 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664913 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664933 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664954 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664977 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.664999 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665021 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665042 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665063 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665085 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665106 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665129 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665286 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665312 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665339 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665363 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665387 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665410 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665434 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665456 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665479 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665500 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665522 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665545 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665567 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665594 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 16 11:55:43 
crc kubenswrapper[4805]: I1216 11:55:43.665616 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665641 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665664 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665688 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665711 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665734 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665755 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665806 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665839 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665866 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665894 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665924 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665953 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665980 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666005 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666032 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666059 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666086 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666114 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666158 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666185 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666255 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666274 4805 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666289 4805 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666303 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.667045 4805 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660772 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660790 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.660905 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661182 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661221 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661349 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661369 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661404 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661433 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661566 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661647 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661767 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661816 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661851 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661896 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.661566 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662155 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662252 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662312 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662330 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662341 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662444 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662522 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662533 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662541 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662754 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662878 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.662906 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663088 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663109 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663173 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663185 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.663246 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665407 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665615 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665629 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.665778 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666255 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666266 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666451 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666612 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.666826 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.668357 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.668557 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.668766 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.668915 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.669069 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.669124 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.669192 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.669385 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.669407 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.669519 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.669615 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.669846 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.670011 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.670030 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.671207 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.670122 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.670382 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.670625 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.680120 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.680679 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.680942 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.681177 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.681363 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.683242 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.683383 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.685237 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.685375 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.685513 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.685774 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.685798 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.686163 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.686245 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.686430 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:55:44.18626315 +0000 UTC m=+17.904521035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.686741 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.686896 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.687194 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.687352 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.687505 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.687636 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.687683 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.687796 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.688112 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.688121 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.688183 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.688405 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.688427 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.688618 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.689248 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.689570 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.689452 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.689644 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.689702 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.689852 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.689722 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.690056 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.690209 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.690220 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.690247 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.690279 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.690320 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.690459 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.690542 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.690662 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.690881 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.690953 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.691058 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.691109 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.691403 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.691478 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.691574 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.691736 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.691838 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.692006 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.692261 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.692276 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.692359 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.692386 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.692525 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.692588 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.692956 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.692968 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.693091 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.693217 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.693405 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.693583 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.693599 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.693691 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.693903 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.694163 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). 
InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.694294 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.695841 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.696067 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.696184 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.694347 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.694361 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.694414 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.694457 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.694607 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.694626 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.694854 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.694912 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.695003 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.695068 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.695243 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.695275 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.695326 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.695653 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.696552 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.696585 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.696599 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.696746 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.696945 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.697502 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.698061 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:44.197550104 +0000 UTC m=+17.915807909 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.700909 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.705455 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.705532 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.706779 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.707043 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.707218 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.707478 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.709008 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.709406 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.709460 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:44.209445396 +0000 UTC m=+17.927703291 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.710419 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.711014 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.711257 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.711277 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.711288 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.711337 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:44.211321092 +0000 UTC m=+17.929578897 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.712272 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.712510 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.715359 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.715744 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.715842 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.715870 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.715859 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.715884 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:43 crc kubenswrapper[4805]: E1216 11:55:43.715970 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:44.215934028 +0000 UTC m=+17.934191893 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.716645 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.718274 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.718409 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.719654 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.719825 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.720047 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.720436 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.720605 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.720892 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.720811 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.721240 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.721357 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.721443 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.721170 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.721368 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.722576 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.722902 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.728515 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.728634 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.733617 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.733790 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.734695 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.734710 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.735134 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.742481 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.744268 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.744563 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.745971 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.755608 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.758606 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767021 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767083 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767231 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767250 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767263 4805 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767275 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767288 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767299 4805 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767309 4805 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767321 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767334 4805 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767345 4805 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath 
\"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767358 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767369 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767381 4805 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767382 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767392 4805 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767441 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767451 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767462 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767474 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767485 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767494 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767504 4805 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767516 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767528 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767540 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767537 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767552 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767564 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767575 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767587 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767599 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767611 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767622 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767633 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767644 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 
11:55:43.767653 4805 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767663 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767673 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767683 4805 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767692 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767702 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767711 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767720 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767730 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767740 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767752 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767763 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767774 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc 
kubenswrapper[4805]: I1216 11:55:43.767785 4805 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767796 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767805 4805 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767815 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767825 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767834 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767843 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767853 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767864 4805 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767872 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767881 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767891 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767901 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 
11:55:43.767910 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767921 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767930 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767939 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767950 4805 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767961 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767972 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767983 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.767992 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768446 4805 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768463 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768473 4805 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768509 4805 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" 
DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768521 4805 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768532 4805 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768571 4805 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768618 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768643 4805 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768654 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768664 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768675 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768685 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768695 4805 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768704 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768715 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768725 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768735 4805 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768745 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768755 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768766 4805 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768777 4805 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768788 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768798 4805 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768809 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768819 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768828 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768838 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768847 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768856 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768865 4805 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768877 4805 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768892 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768901 4805 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768911 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768922 4805 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768932 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768943 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768954 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768965 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768976 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768987 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.768997 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769008 4805 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769019 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769030 4805 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769041 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769052 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769061 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769071 4805 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769081 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769097 4805 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769107 4805 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769118 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769129 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769158 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769169 4805 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769179 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769189 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769198 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769208 4805 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769216 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769225 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769235 4805 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769246 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769256 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769267 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769279 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769288 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769298 
4805 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769308 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769317 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769353 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769363 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769372 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769382 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769392 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769403 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769415 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769426 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769436 4805 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769448 4805 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769459 4805 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769470 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769481 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769490 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769500 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769509 4805 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769518 4805 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769528 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769540 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769550 4805 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769562 4805 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769574 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769585 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769596 4805 
reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769607 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769618 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769630 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769642 4805 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769653 4805 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769664 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769673 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769683 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769718 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769731 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769742 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769750 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: 
I1216 11:55:43.769759 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769769 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769777 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769786 4805 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769794 4805 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769803 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769812 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769821 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769830 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769843 4805 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769854 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769863 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769872 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.769883 4805 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.785258 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.792937 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 16 11:55:43 crc kubenswrapper[4805]: I1216 11:55:43.805738 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.273457 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:55:44 crc kubenswrapper[4805]: E1216 11:55:44.273641 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:55:45.273614773 +0000 UTC m=+18.991872578 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.273794 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.273828 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.273854 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.273876 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:44 crc kubenswrapper[4805]: E1216 11:55:44.273948 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:55:44 crc kubenswrapper[4805]: E1216 11:55:44.274000 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:45.273984854 +0000 UTC m=+18.992242729 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:55:44 crc kubenswrapper[4805]: E1216 11:55:44.274275 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:55:44 crc kubenswrapper[4805]: E1216 11:55:44.274298 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:55:44 crc kubenswrapper[4805]: E1216 11:55:44.274309 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:44 crc kubenswrapper[4805]: E1216 11:55:44.274307 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:55:44 crc kubenswrapper[4805]: E1216 11:55:44.274332 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:55:44 crc kubenswrapper[4805]: E1216 11:55:44.274343 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:44 crc kubenswrapper[4805]: E1216 11:55:44.274356 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:45.274348715 +0000 UTC m=+18.992606520 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:44 crc kubenswrapper[4805]: E1216 11:55:44.274377 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:45.274365855 +0000 UTC m=+18.992623750 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:44 crc kubenswrapper[4805]: E1216 11:55:44.274383 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:55:44 crc kubenswrapper[4805]: E1216 11:55:44.274456 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:45.274440097 +0000 UTC m=+18.992697902 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.525981 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.526811 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.528085 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.528800 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.529917 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.530444 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.531001 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.532010 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.532633 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.533548 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.534064 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.535283 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.535834 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.536369 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.537282 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.537771 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.538709 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.539122 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.540068 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.541337 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.541782 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.543922 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.544477 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.545250 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.545743 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.546463 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.547279 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.547826 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.548494 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.549076 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.549631 4805 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.549753 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.553626 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.554201 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.555175 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.557018 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.557818 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.558929 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.559693 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.560991 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.561539 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.562737 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.563499 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.564779 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.565327 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.566401 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.566975 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.569097 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.569995 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.571303 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.571933 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.573440 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.574339 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.575020 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.607605 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b"} Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.607645 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"adb96fdebb2879d83765702f693049e339492a36ce148b0231ae980a1bcbb032"} Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.609215 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.610446 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca" exitCode=255 Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.610494 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca"} Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.613939 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7a7043d5ade9bea5528ae2556d7743b0b8613123e38661be3c9b4789230eb5fd"} Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.616010 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199"} Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.616115 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409"} Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.616130 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dd2dbebf91fb4b7aa55996e7b38c9c10a1afef89f341ed767348702841c2e958"} Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.640098 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:44Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.640391 4805 scope.go:117] "RemoveContainer" containerID="f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.640424 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.654909 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:44Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.671086 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:44Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.689580 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:44Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.702350 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:44Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.716478 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:44Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.733637 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16
T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:44Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.746213 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:44Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.760619 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:44Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.779424 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:44Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.801094 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:44Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.828385 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:44Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:44 crc kubenswrapper[4805]: I1216 11:55:44.846845 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:44Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.344635 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.344695 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.344782 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.344827 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:47.344814736 +0000 UTC m=+21.063072541 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.344864 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:55:47.344833756 +0000 UTC m=+21.063091561 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.344927 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.344988 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:47.34497185 +0000 UTC m=+21.063229755 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.345076 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.345111 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.345130 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.345222 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.345234 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.345243 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.345267 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:47.345259709 +0000 UTC m=+21.063517504 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.345346 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.345372 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.345387 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.345435 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:47.345418363 +0000 UTC m=+21.063676188 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.522195 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.522195 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.522326 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.522208 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.522404 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:55:45 crc kubenswrapper[4805]: E1216 11:55:45.522496 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.625640 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.628747 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159"} Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.628809 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.641190 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:45Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.655321 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:45Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.668197 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:45Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.680793 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:45Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.694099 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:45Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.741897 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:45Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:45 crc kubenswrapper[4805]: I1216 11:55:45.752762 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:45Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.536896 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.552019 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.557580 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.559345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.559395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.559412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.559475 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.570890 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.576402 4805 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.576681 4805 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.577794 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.577841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.577856 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.577875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.577891 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:46Z","lastTransitionTime":"2025-12-16T11:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:46 crc kubenswrapper[4805]: E1216 11:55:46.602327 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.603445 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.606097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.606132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.606174 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.606194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.606206 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:46Z","lastTransitionTime":"2025-12-16T11:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.626010 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: E1216 11:55:46.628575 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.631934 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11"} Dec 16 
11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.632952 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.632996 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.633013 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.633034 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.633050 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:46Z","lastTransitionTime":"2025-12-16T11:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.643991 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: E1216 11:55:46.648425 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.652480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.652527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.652538 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.652555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.652567 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:46Z","lastTransitionTime":"2025-12-16T11:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.657765 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: E1216 11:55:46.663453 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7
d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.666754 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.666789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.666800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.666815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.666827 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:46Z","lastTransitionTime":"2025-12-16T11:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.669021 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: E1216 11:55:46.681681 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: E1216 11:55:46.682168 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.683238 4805 status_manager.go:875] "Failed to update status for pod"
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.684874 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.684909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.684920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.684937 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.684950 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:46Z","lastTransitionTime":"2025-12-16T11:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.696343 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.708227 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.720746 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.731580 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.742925 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.787717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.787764 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.787773 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.787788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.787797 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:46Z","lastTransitionTime":"2025-12-16T11:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.890364 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.890406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.890437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.890455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.890467 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:46Z","lastTransitionTime":"2025-12-16T11:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.993254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.993563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.993659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.993731 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:46 crc kubenswrapper[4805]: I1216 11:55:46.993810 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:46Z","lastTransitionTime":"2025-12-16T11:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.097566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.097616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.097630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.097647 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.097661 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:47Z","lastTransitionTime":"2025-12-16T11:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
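The NotReady condition repeating above is the kubelet's container-runtime network check: the network plugin reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, which on OVN-Kubernetes clusters happens only once the network operator's components have come up. Below is a minimal Go sketch of the directory probe the message implies — a simplified stand-in for illustration, not the kubelet's actual implementation (the accepted extensions are an assumption based on common CNI config naming):

// cnicheck.go: illustrate the condition behind the repeated
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		// An unreadable or missing directory also leaves the network not ready.
		fmt.Printf("NetworkReady=false: cannot read %s: %v\n", dir, err)
		return
	}
	for _, e := range entries {
		// Any recognizable CNI config file is enough to flip readiness.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("NetworkReady=true: found %s\n", e.Name())
			return
		}
	}
	fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", dir)
}

Because the directory stays empty on this node, the same condition is re-recorded at each node-status sync, which is consistent with the NotReady entries repeating every few hundred milliseconds in this excerpt.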
Has your network provider started?"} Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.099302 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.103201 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.107854 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.112060 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.123426 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.135736 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.148969 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.162827 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.179532 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.194976 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.199582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.199631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.199664 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.199682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.199696 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:47Z","lastTransitionTime":"2025-12-16T11:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.210213 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.222770 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.235700 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.250919 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.263721 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.277224 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.290498 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
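Every one of these status patches fails with the same TLS error: the webhook's serving certificate has NotAfter = 2025-08-24T17:21:41Z, which is earlier than the node clock (2025-12-16T11:55:47Z), so certificate verification rejects the connection before the webhook handler is ever reached. That is the standard validity-window check from Go's crypto/x509. A minimal sketch of the same check against a PEM file follows; the file path is illustrative only, not taken from this log, since here the certificate is presented over TLS by the network-node-identity webhook rather than read from disk:

// certcheck.go: reproduce the validity-window test behind
// "x509: certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/tmp/webhook-cert.pem") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	switch {
	case now.After(cert.NotAfter):
		// The branch this log reflects: current time is past NotAfter.
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}

Until the certificate is rotated or the clock corrected, the status manager keeps retrying and logging the identical failure for each pod, which is why the same error text recurs under every "Failed to update status for pod" entry in this excerpt.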
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.302099 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.302192 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.302202 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.302217 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.302244 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:47Z","lastTransitionTime":"2025-12-16T11:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.303910 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.362562 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.362690 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.362746 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.362794 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:55:51.362765498 +0000 UTC m=+25.081023303 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.362850 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.362890 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.362936 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.362966 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.362971 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.362999 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.362976 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:51.362956664 +0000 UTC m=+25.081214559 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.363079 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.363093 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.363103 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.363108 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:51.363077537 +0000 UTC m=+25.081335392 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.362903 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.363175 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:51.363122349 +0000 UTC m=+25.081380234 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.363210 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:51.363194151 +0000 UTC m=+25.081451996 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.404453 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.404729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.404823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.404918 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.405006 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:47Z","lastTransitionTime":"2025-12-16T11:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.507711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.507925 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.508047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.508115 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.508207 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:47Z","lastTransitionTime":"2025-12-16T11:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.522135 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.522463 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.522201 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.522718 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.522135 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:47 crc kubenswrapper[4805]: E1216 11:55:47.522971 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.610484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.610699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.610889 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.611087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.611290 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:47Z","lastTransitionTime":"2025-12-16T11:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.713966 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.714713 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.714817 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.714934 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.715036 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:47Z","lastTransitionTime":"2025-12-16T11:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.741570 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.757012 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.758195 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.758786 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.771629 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.785856 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.803324 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.814168 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.817645 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.817685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.817695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.817710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.817719 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:47Z","lastTransitionTime":"2025-12-16T11:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.827227 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.841834 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.854492 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.871650 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.895332 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.912997 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.919428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.919461 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.919469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.919484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.919494 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:47Z","lastTransitionTime":"2025-12-16T11:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.930392 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.944855 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.961257 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.978035 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:47 crc kubenswrapper[4805]: I1216 11:55:47.996740 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:47Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.021750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.021790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.021801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.021818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.021830 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:48Z","lastTransitionTime":"2025-12-16T11:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.022431 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:48Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.124625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 
11:55:48.124672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.124683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.124704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.124715 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:48Z","lastTransitionTime":"2025-12-16T11:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.227051 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.227129 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.227165 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.227185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.227195 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:48Z","lastTransitionTime":"2025-12-16T11:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.329725 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.329767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.329776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.329790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.329799 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:48Z","lastTransitionTime":"2025-12-16T11:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.431742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.431787 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.431803 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.431823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.431838 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:48Z","lastTransitionTime":"2025-12-16T11:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.533345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.533386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.533395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.533409 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.533418 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:48Z","lastTransitionTime":"2025-12-16T11:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.636005 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.636044 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.636054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.636071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.636081 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:48Z","lastTransitionTime":"2025-12-16T11:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.738540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.738581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.738591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.738606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.738615 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:48Z","lastTransitionTime":"2025-12-16T11:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.840941 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.840992 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.841002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.841020 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.841031 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:48Z","lastTransitionTime":"2025-12-16T11:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.942330 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-btjs7"] Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.942639 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-btjs7" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.942927 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.942960 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.942969 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.942979 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.942986 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:48Z","lastTransitionTime":"2025-12-16T11:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.947753 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.948160 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.952451 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.954246 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5gm98"] Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.954632 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.954955 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-vffbc"] Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.955344 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-vffbc" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.956019 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-qljjv"] Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.956669 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.962648 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.963034 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.963204 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.963304 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.963546 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.964331 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.965316 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.965417 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.965604 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.968429 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.973580 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.977497 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 16 11:55:48 crc kubenswrapper[4805]: I1216 11:55:48.994037 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:48Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.006986 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.018078 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.035209 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f114
6dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.044606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.044636 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.044645 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.044657 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.044667 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:49Z","lastTransitionTime":"2025-12-16T11:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.049214 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.060277 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.070248 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078184 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/52b42062-39de-4c28-b54d-5c63d046cb6d-hosts-file\") pod \"node-resolver-btjs7\" (UID: \"52b42062-39de-4c28-b54d-5c63d046cb6d\") " pod="openshift-dns/node-resolver-btjs7" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078226 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-run-netns\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078249 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz56s\" (UniqueName: \"kubernetes.io/projected/369287d8-0d6d-483f-8c4b-5439ae4d065c-kube-api-access-kz56s\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078275 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22790094-9e12-4de0-a0bf-5300bed8938f-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078386 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9-proxy-tls\") pod \"machine-config-daemon-5gm98\" (UID: \"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\") " pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078427 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-var-lib-cni-bin\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078444 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-system-cni-dir\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078460 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-os-release\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078475 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-hostroot\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078493 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwvnx\" (UniqueName: \"kubernetes.io/projected/ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9-kube-api-access-jwvnx\") pod \"machine-config-daemon-5gm98\" (UID: \"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\") " pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078542 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-cnibin\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078617 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9-rootfs\") pod \"machine-config-daemon-5gm98\" (UID: \"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\") " pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078653 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-var-lib-cni-multus\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078674 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22790094-9e12-4de0-a0bf-5300bed8938f-cnibin\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078692 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22790094-9e12-4de0-a0bf-5300bed8938f-cni-binary-copy\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078721 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-multus-cni-dir\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078747 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-run-multus-certs\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078777 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2knq\" (UniqueName: \"kubernetes.io/projected/22790094-9e12-4de0-a0bf-5300bed8938f-kube-api-access-v2knq\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078827 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpdpg\" (UniqueName: \"kubernetes.io/projected/52b42062-39de-4c28-b54d-5c63d046cb6d-kube-api-access-zpdpg\") pod \"node-resolver-btjs7\" (UID: \"52b42062-39de-4c28-b54d-5c63d046cb6d\") " pod="openshift-dns/node-resolver-btjs7" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078861 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22790094-9e12-4de0-a0bf-5300bed8938f-os-release\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078887 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/369287d8-0d6d-483f-8c4b-5439ae4d065c-cni-binary-copy\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.078934 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22790094-9e12-4de0-a0bf-5300bed8938f-system-cni-dir\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.079000 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-multus-socket-dir-parent\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.079044 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-run-k8s-cni-cncf-io\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.079065 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-etc-kubernetes\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.079098 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9-mcd-auth-proxy-config\") pod \"machine-config-daemon-5gm98\" (UID: \"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\") " pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.079119 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-multus-conf-dir\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.079150 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-var-lib-kubelet\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.079184 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/369287d8-0d6d-483f-8c4b-5439ae4d065c-multus-daemon-config\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.079218 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22790094-9e12-4de0-a0bf-5300bed8938f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " 
pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.079399 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.093670 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.105341 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.116571 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.126504 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.137286 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.146607 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.146637 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.146645 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.146658 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.146667 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:49Z","lastTransitionTime":"2025-12-16T11:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.156777 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180073 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-multus-socket-dir-parent\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180109 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-run-k8s-cni-cncf-io\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180124 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-etc-kubernetes\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180156 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9-mcd-auth-proxy-config\") pod \"machine-config-daemon-5gm98\" (UID: \"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\") " pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180174 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-multus-conf-dir\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180188 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-var-lib-kubelet\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180203 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/369287d8-0d6d-483f-8c4b-5439ae4d065c-multus-daemon-config\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180218 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22790094-9e12-4de0-a0bf-5300bed8938f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180238 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/52b42062-39de-4c28-b54d-5c63d046cb6d-hosts-file\") pod \"node-resolver-btjs7\" (UID: \"52b42062-39de-4c28-b54d-5c63d046cb6d\") " pod="openshift-dns/node-resolver-btjs7" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180252 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-run-netns\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180266 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz56s\" (UniqueName: \"kubernetes.io/projected/369287d8-0d6d-483f-8c4b-5439ae4d065c-kube-api-access-kz56s\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180279 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22790094-9e12-4de0-a0bf-5300bed8938f-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180295 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9-proxy-tls\") pod \"machine-config-daemon-5gm98\" (UID: \"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\") " pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180313 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-var-lib-cni-bin\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180329 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwvnx\" (UniqueName: \"kubernetes.io/projected/ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9-kube-api-access-jwvnx\") pod \"machine-config-daemon-5gm98\" (UID: \"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\") " pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180342 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-system-cni-dir\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180355 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-os-release\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180367 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-hostroot\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180391 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-cnibin\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180405 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9-rootfs\") pod \"machine-config-daemon-5gm98\" (UID: \"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\") " pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180420 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-var-lib-cni-multus\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 
11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180433 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22790094-9e12-4de0-a0bf-5300bed8938f-cnibin\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180446 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22790094-9e12-4de0-a0bf-5300bed8938f-cni-binary-copy\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180471 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-multus-cni-dir\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180486 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2knq\" (UniqueName: \"kubernetes.io/projected/22790094-9e12-4de0-a0bf-5300bed8938f-kube-api-access-v2knq\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180501 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-run-multus-certs\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180516 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpdpg\" (UniqueName: \"kubernetes.io/projected/52b42062-39de-4c28-b54d-5c63d046cb6d-kube-api-access-zpdpg\") pod \"node-resolver-btjs7\" (UID: \"52b42062-39de-4c28-b54d-5c63d046cb6d\") " pod="openshift-dns/node-resolver-btjs7" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180529 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22790094-9e12-4de0-a0bf-5300bed8938f-os-release\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180543 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/369287d8-0d6d-483f-8c4b-5439ae4d065c-cni-binary-copy\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180557 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22790094-9e12-4de0-a0bf-5300bed8938f-system-cni-dir\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc 
kubenswrapper[4805]: I1216 11:55:49.180615 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22790094-9e12-4de0-a0bf-5300bed8938f-system-cni-dir\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180669 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-multus-socket-dir-parent\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180691 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-run-k8s-cni-cncf-io\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.180711 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-etc-kubernetes\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.181243 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9-mcd-auth-proxy-config\") pod \"machine-config-daemon-5gm98\" (UID: \"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\") " pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.181278 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-multus-conf-dir\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.181303 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-var-lib-kubelet\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.181707 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/369287d8-0d6d-483f-8c4b-5439ae4d065c-multus-daemon-config\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.181838 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-cnibin\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.181870 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-var-lib-cni-bin\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.181934 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9-rootfs\") pod \"machine-config-daemon-5gm98\" (UID: \"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\") " pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.181984 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-run-netns\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.182031 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-run-multus-certs\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.182006 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-host-var-lib-cni-multus\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.182077 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/52b42062-39de-4c28-b54d-5c63d046cb6d-hosts-file\") pod \"node-resolver-btjs7\" (UID: \"52b42062-39de-4c28-b54d-5c63d046cb6d\") " pod="openshift-dns/node-resolver-btjs7" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.182122 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-hostroot\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.182168 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-os-release\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.182171 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22790094-9e12-4de0-a0bf-5300bed8938f-os-release\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.182192 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22790094-9e12-4de0-a0bf-5300bed8938f-cnibin\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: 
I1216 11:55:49.182243 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-system-cni-dir\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.182252 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/369287d8-0d6d-483f-8c4b-5439ae4d065c-multus-cni-dir\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.182320 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22790094-9e12-4de0-a0bf-5300bed8938f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.182702 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22790094-9e12-4de0-a0bf-5300bed8938f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.182805 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/369287d8-0d6d-483f-8c4b-5439ae4d065c-cni-binary-copy\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.182810 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22790094-9e12-4de0-a0bf-5300bed8938f-cni-binary-copy\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.183849 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.187628 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9-proxy-tls\") pod \"machine-config-daemon-5gm98\" (UID: \"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\") " pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.203381 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.208889 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz56s\" (UniqueName: \"kubernetes.io/projected/369287d8-0d6d-483f-8c4b-5439ae4d065c-kube-api-access-kz56s\") pod \"multus-vffbc\" (UID: \"369287d8-0d6d-483f-8c4b-5439ae4d065c\") " pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.216825 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2knq\" (UniqueName: \"kubernetes.io/projected/22790094-9e12-4de0-a0bf-5300bed8938f-kube-api-access-v2knq\") pod \"multus-additional-cni-plugins-qljjv\" (UID: \"22790094-9e12-4de0-a0bf-5300bed8938f\") " pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.217025 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpdpg\" (UniqueName: \"kubernetes.io/projected/52b42062-39de-4c28-b54d-5c63d046cb6d-kube-api-access-zpdpg\") pod \"node-resolver-btjs7\" (UID: \"52b42062-39de-4c28-b54d-5c63d046cb6d\") " pod="openshift-dns/node-resolver-btjs7" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.221033 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwvnx\" (UniqueName: \"kubernetes.io/projected/ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9-kube-api-access-jwvnx\") pod \"machine-config-daemon-5gm98\" (UID: \"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\") " pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.226072 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.242585 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.248413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.248451 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.248462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.248479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.248491 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:49Z","lastTransitionTime":"2025-12-16T11:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.256684 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-btjs7" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.257988 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.269063 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.279979 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-vffbc" Dec 16 11:55:49 crc kubenswrapper[4805]: W1216 11:55:49.286323 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac31fa32_4bce_4ac3_ba7c_e3b0da69c3b9.slice/crio-0aabb8e15b973114d3b0ae87451d39c7015018ae3abb473b42f822ec54edb5ca WatchSource:0}: Error finding container 0aabb8e15b973114d3b0ae87451d39c7015018ae3abb473b42f822ec54edb5ca: Status 404 returned error can't find the container with id 0aabb8e15b973114d3b0ae87451d39c7015018ae3abb473b42f822ec54edb5ca Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.286461 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qljjv" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.289479 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.306187 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: W1216 11:55:49.308177 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22790094_9e12_4de0_a0bf_5300bed8938f.slice/crio-716d30276c369639886eadbf7e4c252580323a3ebfc9d1bad5b9aaa28bc33ebd WatchSource:0}: Error finding container 716d30276c369639886eadbf7e4c252580323a3ebfc9d1bad5b9aaa28bc33ebd: Status 404 returned error can't find the container with id 716d30276c369639886eadbf7e4c252580323a3ebfc9d1bad5b9aaa28bc33ebd Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.336187 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.350661 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.350688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.350703 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.350715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.350724 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:49Z","lastTransitionTime":"2025-12-16T11:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.353124 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.362192 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lwjrh"] Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.363442 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.367305 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.367300 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.367307 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.367470 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.367530 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.367623 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.367634 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.381715 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.395229 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.408580 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.432372 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.468562 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.473431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.473472 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.473483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.473503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.473516 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:49Z","lastTransitionTime":"2025-12-16T11:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482549 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-slash\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482597 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-ovn\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482620 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-systemd\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482639 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-log-socket\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482657 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-kubelet\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482677 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-run-netns\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482706 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovnkube-script-lib\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482662 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482727 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-run-ovn-kubernetes\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482746 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-cni-netd\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482766 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlg5j\" (UniqueName: \"kubernetes.io/projected/cb7da1ad-f74d-471f-a98f-274cef7fe393-kube-api-access-tlg5j\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482785 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-env-overrides\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482805 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovn-node-metrics-cert\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482845 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482879 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-etc-openvswitch\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482892 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-node-log\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482905 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-cni-bin\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482922 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-var-lib-openvswitch\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.482996 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-openvswitch\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.483058 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-systemd-units\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.483255 
4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovnkube-config\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.494677 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.509078 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.525030 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.525357 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.525401 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.525363 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:55:49 crc kubenswrapper[4805]: E1216 11:55:49.525467 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:55:49 crc kubenswrapper[4805]: E1216 11:55:49.525532 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:55:49 crc kubenswrapper[4805]: E1216 11:55:49.525602 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.543591 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.565543 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f114
6dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.578778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.578823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.578834 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.578851 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.578862 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:49Z","lastTransitionTime":"2025-12-16T11:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.582933 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: 
I1216 11:55:49.583967 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-slash\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584020 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-ovn\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584063 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-systemd\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584100 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-log-socket\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584124 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-kubelet\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584186 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-run-netns\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584220 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovnkube-script-lib\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584265 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-run-ovn-kubernetes\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584296 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-cni-netd\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584348 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tlg5j\" (UniqueName: \"kubernetes.io/projected/cb7da1ad-f74d-471f-a98f-274cef7fe393-kube-api-access-tlg5j\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584378 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-env-overrides\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584437 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovn-node-metrics-cert\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584498 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584547 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-etc-openvswitch\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584586 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-node-log\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584610 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-cni-bin\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584658 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-var-lib-openvswitch\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584687 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-openvswitch\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584728 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-systemd-units\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.584766 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovnkube-config\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.585552 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-slash\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.585645 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-run-netns\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.585649 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-kubelet\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.585668 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-ovn\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.585690 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-run-ovn-kubernetes\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.585725 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-systemd\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.585705 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-cni-netd\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.585834 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-node-log\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.585861 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-log-socket\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.585883 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.585906 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-etc-openvswitch\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.585925 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-var-lib-openvswitch\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.586004 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovnkube-config\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.586174 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-openvswitch\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.586216 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-cni-bin\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.586253 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-systemd-units\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.586286 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-env-overrides\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.586454 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovnkube-script-lib\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.590680 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovn-node-metrics-cert\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.602974 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlg5j\" (UniqueName: \"kubernetes.io/projected/cb7da1ad-f74d-471f-a98f-274cef7fe393-kube-api-access-tlg5j\") pod \"ovnkube-node-lwjrh\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") " pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.607779 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.630033 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.640182 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vffbc" event={"ID":"369287d8-0d6d-483f-8c4b-5439ae4d065c","Type":"ContainerStarted","Data":"a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879"} Dec 16 11:55:49 
crc kubenswrapper[4805]: I1216 11:55:49.640243 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vffbc" event={"ID":"369287d8-0d6d-483f-8c4b-5439ae4d065c","Type":"ContainerStarted","Data":"1d5f2c48253a1b7455af4101197bb2c03d44a84658704342bc4bfd5cb25df4d4"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.641423 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" event={"ID":"22790094-9e12-4de0-a0bf-5300bed8938f","Type":"ContainerStarted","Data":"cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.641483 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" event={"ID":"22790094-9e12-4de0-a0bf-5300bed8938f","Type":"ContainerStarted","Data":"716d30276c369639886eadbf7e4c252580323a3ebfc9d1bad5b9aaa28bc33ebd"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.642882 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.642930 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.642943 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"0aabb8e15b973114d3b0ae87451d39c7015018ae3abb473b42f822ec54edb5ca"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.644307 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-btjs7" event={"ID":"52b42062-39de-4c28-b54d-5c63d046cb6d","Type":"ContainerStarted","Data":"230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.644371 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-btjs7" event={"ID":"52b42062-39de-4c28-b54d-5c63d046cb6d","Type":"ContainerStarted","Data":"4b38ad63226bca2b2180aa53bf0504a59fc3146ca261e83ab6972aa3dead7f04"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.663686 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f114
6dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.679026 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.680859 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.680903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.680921 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.680938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.680950 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:49Z","lastTransitionTime":"2025-12-16T11:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.686340 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.695981 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.707112 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.721576 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.736322 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.753200 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.770792 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.783188 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.783227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.783240 4805 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.783255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.783267 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:49Z","lastTransitionTime":"2025-12-16T11:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.797901 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.816128 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.829063 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.843047 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.854358 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.870254 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.884076 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.885795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.885850 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.885859 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.885873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.885882 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:49Z","lastTransitionTime":"2025-12-16T11:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.899589 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.916929 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.932228 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.948113 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.968065 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.987998 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.988397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.988418 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.988426 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.988439 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:49 crc kubenswrapper[4805]: I1216 11:55:49.988447 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:49Z","lastTransitionTime":"2025-12-16T11:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.001618 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:49Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.014117 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.025383 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.034996 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.052455 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.071283 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.093576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.093617 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.093762 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.093781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.093791 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:50Z","lastTransitionTime":"2025-12-16T11:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.098590 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.196465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.196496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.196504 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.196516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.196525 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:50Z","lastTransitionTime":"2025-12-16T11:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.298614 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.298656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.298667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.298684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.298695 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:50Z","lastTransitionTime":"2025-12-16T11:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.401428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.401485 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.401504 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.401527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.401544 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:50Z","lastTransitionTime":"2025-12-16T11:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.504019 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.504721 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.504803 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.504917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.505024 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:50Z","lastTransitionTime":"2025-12-16T11:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.606871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.606915 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.606927 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.606942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.606953 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:50Z","lastTransitionTime":"2025-12-16T11:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.648123 4805 generic.go:334] "Generic (PLEG): container finished" podID="22790094-9e12-4de0-a0bf-5300bed8938f" containerID="cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255" exitCode=0 Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.648225 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" event={"ID":"22790094-9e12-4de0-a0bf-5300bed8938f","Type":"ContainerDied","Data":"cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255"} Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.651522 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerID="10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99" exitCode=0 Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.651560 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerDied","Data":"10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99"} Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.651586 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerStarted","Data":"e5d3d11761d9ed9309f82d1141e5396f91bd9f21427e53c1ce87fcb15f1b577a"} Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.671018 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc 
kubenswrapper[4805]: I1216 11:55:50.691165 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nb
db\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.704972 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.713309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.713346 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.713358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.713375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.713387 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:50Z","lastTransitionTime":"2025-12-16T11:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.722860 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.735192 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.746015 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.758498 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.773092 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa
96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.788707 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.801602 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.820028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.820068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.820080 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.820098 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.820111 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:50Z","lastTransitionTime":"2025-12-16T11:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.824856 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/
lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.840682 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.853181 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.866047 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.884410 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.902284 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.915366 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.922290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.922478 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.922628 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.922717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.922786 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:50Z","lastTransitionTime":"2025-12-16T11:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.926817 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.938915 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.964437 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.977365 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:50 crc kubenswrapper[4805]: I1216 11:55:50.995251 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:50Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.017352 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.025405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.025442 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.025455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.025473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.025485 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:51Z","lastTransitionTime":"2025-12-16T11:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.056774 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.092109 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.127879 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.127914 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.127924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.127938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.127950 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:51Z","lastTransitionTime":"2025-12-16T11:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.133826 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.176059 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.219460 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z 
is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.229904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.229940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.229948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.229972 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.229983 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:51Z","lastTransitionTime":"2025-12-16T11:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.331885 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.331909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.331917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.331929 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.331939 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:51Z","lastTransitionTime":"2025-12-16T11:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.402603 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.402709 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:55:59.402688391 +0000 UTC m=+33.120946196 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.402749 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.402781 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.402803 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.402839 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.402912 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.402933 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.402977 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:59.402966339 +0000 UTC m=+33.121224144 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.402992 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:59.40298513 +0000 UTC m=+33.121242935 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.403003 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.403033 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.403032 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.403064 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.403077 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.403116 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:59.403104113 +0000 UTC m=+33.121361928 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.403044 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.403177 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-16 11:55:59.403168605 +0000 UTC m=+33.121426420 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.434811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.434858 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.434868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.434882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.434892 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:51Z","lastTransitionTime":"2025-12-16T11:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.521756 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.521756 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.521811 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.522247 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.522313 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:55:51 crc kubenswrapper[4805]: E1216 11:55:51.522460 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.537838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.537888 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.537917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.537938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.537948 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:51Z","lastTransitionTime":"2025-12-16T11:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.641338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.641393 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.641410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.641429 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.641442 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:51Z","lastTransitionTime":"2025-12-16T11:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.658005 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-br4tl"] Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.658360 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-br4tl" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.660430 4805 generic.go:334] "Generic (PLEG): container finished" podID="22790094-9e12-4de0-a0bf-5300bed8938f" containerID="1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af" exitCode=0 Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.660572 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" event={"ID":"22790094-9e12-4de0-a0bf-5300bed8938f","Type":"ContainerDied","Data":"1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.663450 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.664106 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.664581 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.664602 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.673742 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerStarted","Data":"52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.673830 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerStarted","Data":"2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7"} Dec 16 11:55:51 crc 
kubenswrapper[4805]: I1216 11:55:51.673846 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerStarted","Data":"b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.673858 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerStarted","Data":"cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.673869 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerStarted","Data":"7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.673881 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerStarted","Data":"87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.697813 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z 
is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.715484 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.726973 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.740199 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.747886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.747929 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.747940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.747957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.747973 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:51Z","lastTransitionTime":"2025-12-16T11:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.758480 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.771071 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"m
ultus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.787226 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.802018 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.806760 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3f3f7246-ff42-4e43-a0fd-89ec30096be2-serviceca\") pod \"node-ca-br4tl\" (UID: \"3f3f7246-ff42-4e43-a0fd-89ec30096be2\") " pod="openshift-image-registry/node-ca-br4tl" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.806829 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f3f7246-ff42-4e43-a0fd-89ec30096be2-host\") pod \"node-ca-br4tl\" (UID: \"3f3f7246-ff42-4e43-a0fd-89ec30096be2\") " pod="openshift-image-registry/node-ca-br4tl" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.806861 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24cbj\" (UniqueName: \"kubernetes.io/projected/3f3f7246-ff42-4e43-a0fd-89ec30096be2-kube-api-access-24cbj\") pod \"node-ca-br4tl\" (UID: \"3f3f7246-ff42-4e43-a0fd-89ec30096be2\") " pod="openshift-image-registry/node-ca-br4tl" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.815121 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.824688 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.846105 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f114
6dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.850093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.850117 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.850125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.850154 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.850163 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:51Z","lastTransitionTime":"2025-12-16T11:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.858222 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.870925 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.885059 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.897169 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.908186 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3f3f7246-ff42-4e43-a0fd-89ec30096be2-serviceca\") pod \"node-ca-br4tl\" (UID: \"3f3f7246-ff42-4e43-a0fd-89ec30096be2\") " pod="openshift-image-registry/node-ca-br4tl" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.908301 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f3f7246-ff42-4e43-a0fd-89ec30096be2-host\") pod \"node-ca-br4tl\" (UID: \"3f3f7246-ff42-4e43-a0fd-89ec30096be2\") " pod="openshift-image-registry/node-ca-br4tl" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.908319 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24cbj\" (UniqueName: \"kubernetes.io/projected/3f3f7246-ff42-4e43-a0fd-89ec30096be2-kube-api-access-24cbj\") pod \"node-ca-br4tl\" (UID: \"3f3f7246-ff42-4e43-a0fd-89ec30096be2\") " pod="openshift-image-registry/node-ca-br4tl" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.908560 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f3f7246-ff42-4e43-a0fd-89ec30096be2-host\") pod \"node-ca-br4tl\" (UID: \"3f3f7246-ff42-4e43-a0fd-89ec30096be2\") " pod="openshift-image-registry/node-ca-br4tl" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.909404 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3f3f7246-ff42-4e43-a0fd-89ec30096be2-serviceca\") pod \"node-ca-br4tl\" (UID: \"3f3f7246-ff42-4e43-a0fd-89ec30096be2\") " pod="openshift-image-registry/node-ca-br4tl" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.941720 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24cbj\" (UniqueName: \"kubernetes.io/projected/3f3f7246-ff42-4e43-a0fd-89ec30096be2-kube-api-access-24cbj\") pod 
\"node-ca-br4tl\" (UID: \"3f3f7246-ff42-4e43-a0fd-89ec30096be2\") " pod="openshift-image-registry/node-ca-br4tl" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.953006 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.953046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.953058 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.953074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.953086 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:51Z","lastTransitionTime":"2025-12-16T11:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.957056 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.974681 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-br4tl" Dec 16 11:55:51 crc kubenswrapper[4805]: W1216 11:55:51.989676 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f3f7246_ff42_4e43_a0fd_89ec30096be2.slice/crio-3f44c3fa0b05e895c498cc7d3be1c3e3619d58f9e72c5bdaf85f08e2b07dc23d WatchSource:0}: Error finding container 3f44c3fa0b05e895c498cc7d3be1c3e3619d58f9e72c5bdaf85f08e2b07dc23d: Status 404 returned error can't find the container with id 3f44c3fa0b05e895c498cc7d3be1c3e3619d58f9e72c5bdaf85f08e2b07dc23d Dec 16 11:55:51 crc kubenswrapper[4805]: I1216 11:55:51.997861 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:51Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.032968 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.055575 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.055602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.055610 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.055622 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.055632 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:52Z","lastTransitionTime":"2025-12-16T11:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.071817 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.113969 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.157639 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.157675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.157686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.157702 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.157711 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:52Z","lastTransitionTime":"2025-12-16T11:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.161382 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943
a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.197306 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.235747 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.261602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.261649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.261661 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.261678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.261687 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:52Z","lastTransitionTime":"2025-12-16T11:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.273980 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.312946 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"message\\\":\\\"containers with 
unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.357874 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.365710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.365747 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.365757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.365772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.365782 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:52Z","lastTransitionTime":"2025-12-16T11:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.395271 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.436268 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.468684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.469067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.469162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.469326 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.469408 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:52Z","lastTransitionTime":"2025-12-16T11:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.476679 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.518574 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z 
is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.572122 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.572201 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.572220 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.572241 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.572256 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:52Z","lastTransitionTime":"2025-12-16T11:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.675761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.675806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.675815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.675835 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.675846 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:52Z","lastTransitionTime":"2025-12-16T11:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.686520 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-br4tl" event={"ID":"3f3f7246-ff42-4e43-a0fd-89ec30096be2","Type":"ContainerStarted","Data":"d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e"} Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.686597 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-br4tl" event={"ID":"3f3f7246-ff42-4e43-a0fd-89ec30096be2","Type":"ContainerStarted","Data":"3f44c3fa0b05e895c498cc7d3be1c3e3619d58f9e72c5bdaf85f08e2b07dc23d"} Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.690719 4805 generic.go:334] "Generic (PLEG): container finished" podID="22790094-9e12-4de0-a0bf-5300bed8938f" containerID="116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c" exitCode=0 Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.690766 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" event={"ID":"22790094-9e12-4de0-a0bf-5300bed8938f","Type":"ContainerDied","Data":"116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c"} Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.709428 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.730669 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z 
is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.746130 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.761696 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.776608 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.778862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.778899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.778909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.778925 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.778935 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:52Z","lastTransitionTime":"2025-12-16T11:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.791090 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.807657 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.834446 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.874211 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.881041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.881092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.881103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.881125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.881177 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:52Z","lastTransitionTime":"2025-12-16T11:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.916310 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.959075 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f114
6dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:52Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.984631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.984699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.984722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.984752 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:52 crc kubenswrapper[4805]: I1216 11:55:52.984774 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:52Z","lastTransitionTime":"2025-12-16T11:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.009276 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:53Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.044554 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:53Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.079448 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:53Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.087502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.087559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.087569 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.087589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.087601 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:53Z","lastTransitionTime":"2025-12-16T11:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.124219 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:53Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.190427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.190475 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.190486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.190501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.190511 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:53Z","lastTransitionTime":"2025-12-16T11:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.297892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.297940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.297951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.297963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.297972 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:53Z","lastTransitionTime":"2025-12-16T11:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.400871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.401112 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.401227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.401323 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.401403 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:53Z","lastTransitionTime":"2025-12-16T11:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.504571 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.504617 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.504631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.504648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.504660 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:53Z","lastTransitionTime":"2025-12-16T11:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.521950 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.522024 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.522023 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:53 crc kubenswrapper[4805]: E1216 11:55:53.522177 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:55:53 crc kubenswrapper[4805]: E1216 11:55:53.522313 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:55:53 crc kubenswrapper[4805]: E1216 11:55:53.522437 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.607371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.607401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.607409 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.607422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.607432 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:53Z","lastTransitionTime":"2025-12-16T11:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.697620 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerStarted","Data":"f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31"} Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.699717 4805 generic.go:334] "Generic (PLEG): container finished" podID="22790094-9e12-4de0-a0bf-5300bed8938f" containerID="069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150" exitCode=0 Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.699775 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" event={"ID":"22790094-9e12-4de0-a0bf-5300bed8938f","Type":"ContainerDied","Data":"069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150"} Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.709868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.709893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.709901 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.709912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.709921 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:53Z","lastTransitionTime":"2025-12-16T11:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.724891 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:53Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.834314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.834341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.834351 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.834366 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.834377 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:53Z","lastTransitionTime":"2025-12-16T11:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.836265 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:53Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.848427 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:53Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.888395 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:53Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.901633 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:53Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.915588 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:53Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.931365 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:53Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.937267 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.937301 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.937310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.937323 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.937334 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:53Z","lastTransitionTime":"2025-12-16T11:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.947382 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:53Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.961755 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:53Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.980249 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:53Z 
is after 2025-08-24T17:21:41Z" Dec 16 11:55:53 crc kubenswrapper[4805]: I1216 11:55:53.994974 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:53Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.010603 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.024409 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.039995 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.040278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.040374 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.040468 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.039963 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.040693 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:54Z","lastTransitionTime":"2025-12-16T11:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.057073 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.079341 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269
019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.090826 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.114656 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.128585 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.140562 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.147905 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.147963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.147976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.147995 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.148006 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:54Z","lastTransitionTime":"2025-12-16T11:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.158931 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.177601 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z 
is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.191362 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.204852 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.216676 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.226805 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.250898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.251156 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.251275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.251338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.251391 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:54Z","lastTransitionTime":"2025-12-16T11:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.252059 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.267239 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.291179 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.313568 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.355436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.355486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.355500 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.355524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.355536 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:54Z","lastTransitionTime":"2025-12-16T11:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.462644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.462684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.462693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.462708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.462717 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:54Z","lastTransitionTime":"2025-12-16T11:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.564776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.564812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.564822 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.564839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.564851 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:54Z","lastTransitionTime":"2025-12-16T11:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.666844 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.666873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.666881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.666894 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.666903 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:54Z","lastTransitionTime":"2025-12-16T11:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.705640 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" event={"ID":"22790094-9e12-4de0-a0bf-5300bed8938f","Type":"ContainerStarted","Data":"0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67"} Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.721434 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\
\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.734539 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.745714 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.756667 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.768925 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.768977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.768988 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.769005 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.769018 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:54Z","lastTransitionTime":"2025-12-16T11:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.769502 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.786396 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f114
6dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.798348 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.811356 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.822673 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.834678 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.854702 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.870677 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.871217 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.871248 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.871260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.871277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.871287 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:54Z","lastTransitionTime":"2025-12-16T11:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.884374 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.899385 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.914695 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:54Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.974522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.974621 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.974647 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.974672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:54 crc kubenswrapper[4805]: I1216 11:55:54.974688 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:54Z","lastTransitionTime":"2025-12-16T11:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.077014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.077060 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.077077 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.077097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.077109 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:55Z","lastTransitionTime":"2025-12-16T11:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.179167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.179207 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.179218 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.179234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.179244 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:55Z","lastTransitionTime":"2025-12-16T11:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.281714 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.281755 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.281769 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.281791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.281805 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:55Z","lastTransitionTime":"2025-12-16T11:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.385306 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.385340 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.385347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.385384 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.385397 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:55Z","lastTransitionTime":"2025-12-16T11:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.487771 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.488368 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.488394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.488425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.488440 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:55Z","lastTransitionTime":"2025-12-16T11:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.522273 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.522354 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.522273 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:55:55 crc kubenswrapper[4805]: E1216 11:55:55.522398 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:55:55 crc kubenswrapper[4805]: E1216 11:55:55.522477 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:55:55 crc kubenswrapper[4805]: E1216 11:55:55.522540 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.591502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.591539 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.591551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.591568 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.591583 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:55Z","lastTransitionTime":"2025-12-16T11:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.694212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.694447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.694548 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.694625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.694707 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:55Z","lastTransitionTime":"2025-12-16T11:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.710466 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerStarted","Data":"16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd117a2c5f8c5d6192a41b9b3f5"} Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.711659 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.711710 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.714621 4805 generic.go:334] "Generic (PLEG): container finished" podID="22790094-9e12-4de0-a0bf-5300bed8938f" containerID="0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67" exitCode=0 Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.714659 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" event={"ID":"22790094-9e12-4de0-a0bf-5300bed8938f","Type":"ContainerDied","Data":"0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67"} Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.723258 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.745284 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.745704 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.760257 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.765798 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.779071 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\
"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.795659 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd117a2c5f8c5d6192a41b9b3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.796889 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.796911 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.796920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.796934 4805 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.796944 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:55Z","lastTransitionTime":"2025-12-16T11:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.806677 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",
\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.816778 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.827615 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.839733 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.854910 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.868776 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa
96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.881812 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.896470 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.901513 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.901554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.901565 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.901583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.901593 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:55Z","lastTransitionTime":"2025-12-16T11:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.916906 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/
lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.929177 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.943472 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.959185 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.973524 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.987543 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:55 crc kubenswrapper[4805]: I1216 11:55:55.998686 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:55Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.003167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.003206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.003215 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.003229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.003240 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:56Z","lastTransitionTime":"2025-12-16T11:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.023111 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-1
6T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443
f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.039564 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.120236 4805 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.120286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.120298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.120316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.120327 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:56Z","lastTransitionTime":"2025-12-16T11:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.120451 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.135242 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.146135 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.162992 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd117a2c5f8c5d6192a41b9b3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.174954 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.189365 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.200078 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.212504 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.222310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.222349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.222361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.222379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.222393 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:56Z","lastTransitionTime":"2025-12-16T11:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.325239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.325271 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.325281 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.325295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.325306 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:56Z","lastTransitionTime":"2025-12-16T11:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.427472 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.427680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.427784 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.427860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.427949 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:56Z","lastTransitionTime":"2025-12-16T11:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.530401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.530448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.530459 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.530477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.530489 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:56Z","lastTransitionTime":"2025-12-16T11:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.536392 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.558224 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.569711 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.579360 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.632808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.632854 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.632866 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.632883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.632896 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:56Z","lastTransitionTime":"2025-12-16T11:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.660875 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.674577 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.688905 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.710364 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd1
17a2c5f8c5d6192a41b9b3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.724840 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" event={"ID":"22790094-9e12-4de0-a0bf-5300bed8938f","Type":"ContainerStarted","Data":"aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d"} Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.724903 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.726291 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.736067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.736102 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.736111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.736125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:56 crc kubenswrapper[4805]: I1216 11:55:56.736134 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:56Z","lastTransitionTime":"2025-12-16T11:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:56.741442 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:56.755009 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:56.764244 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:56.776770 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:56.791168 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:56.804866 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:56Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.073782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.073842 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.073860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.073884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.073901 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:57Z","lastTransitionTime":"2025-12-16T11:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:57 crc kubenswrapper[4805]: E1216 11:55:57.088481 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.094124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.094179 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.094192 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.094210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.094223 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:57Z","lastTransitionTime":"2025-12-16T11:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:57 crc kubenswrapper[4805]: E1216 11:55:57.109467 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.113701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.113741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
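
Every one of these patch attempts dies client-side: the kubelet's POST to the node-local webhook endpoint fails TLS verification because the webhook's serving certificate expired on 2025-08-24, months before the node's clock time of 2025-12-16. A minimal Go sketch of the same check, dialing the 127.0.0.1:9743 endpoint from the log and comparing the peer certificate's NotAfter to the current time (an illustrative diagnostic helper, not part of the kubelet):

```go
// Hypothetical diagnostic, not kubelet code: dial the webhook endpoint from
// the log and report whether its serving certificate has expired, which is
// the condition behind "x509: certificate has expired or is not yet valid"
// in the entries above.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Skip verification so the handshake succeeds and the expired chain
	// can still be inspected.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("serving cert valid from %s until %s\n", cert.NotBefore, cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		// Matches the log: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z.
		fmt.Println("certificate has expired")
	}
}
```
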
event="NodeHasNoDiskPressure" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.113754 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.113770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.113781 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:57Z","lastTransitionTime":"2025-12-16T11:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:57 crc kubenswrapper[4805]: E1216 11:55:57.126903 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.130256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.130381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
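
The same E1216 entry recurs at 11:55:57.088481, .109467, .126903, .144678, and .161062 because the kubelet bounds each node-status sync with a fixed number of attempts; upstream Kubernetes names this bound nodeStatusUpdateRetry and sets it to 5, after which the sync is abandoned until the next period. A sketch of that retry pattern (an illustration of the behavior, not the kubelet's actual code):

```go
// Sketch of the bounded-retry loop behind the repeated
// "Error updating node status, will retry" entries above.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // upstream kubelet retry bound per sync

// patchNodeStatus stands in for the PATCH that the admission webhook
// rejects in the log; here it always fails, as it does on this node.
func patchNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": x509: certificate has expired`)
}

func tryUpdateNodeStatus() error {
	var err error
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err = patchNodeStatus(); err == nil {
			return nil
		}
		fmt.Println("Error updating node status, will retry:", err)
	}
	// The kubelet logs a similar "exceeds retry count" error once all
	// attempts in a sync are exhausted.
	return fmt.Errorf("update node status exceeds retry count: %w", err)
}

func main() {
	if err := tryUpdateNodeStatus(); err != nil {
		fmt.Println(err)
	}
}
```
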
event="NodeHasNoDiskPressure" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.130455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.130541 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.130622 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:57Z","lastTransitionTime":"2025-12-16T11:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:57 crc kubenswrapper[4805]: E1216 11:55:57.144678 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.148674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.148777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
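
Each cycle also re-records NodeNotReady with the same root cause: the container runtime reports NetworkReady=false because no network configuration exists in /etc/kubernetes/cni/net.d/, and the network plugin that would write one appears to be wedged behind the same expired webhook certificate seen in the pod-status failures above. A minimal sketch of that directory probe, assuming the conventional CNI config extensions (illustrative, not CRI-O's implementation):

```go
// Minimal sketch of the check behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/": look for a config file in the CNI conf
// directory and report not-ready until one appears.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // CNI confdir from the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	for _, e := range entries {
		// Assumed conventional CNI config extensions.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(confDir, e.Name()))
			return
		}
	}
	// This is the state the node above is stuck in.
	fmt.Println("no CNI configuration file in", confDir)
}
```
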
event="NodeHasNoDiskPressure" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.148790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.148813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.148830 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:57Z","lastTransitionTime":"2025-12-16T11:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:57 crc kubenswrapper[4805]: E1216 11:55:57.161062 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: E1216 11:55:57.161209 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.162737 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.162767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.162780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.162815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.162825 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:57Z","lastTransitionTime":"2025-12-16T11:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.265511 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.265540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.265552 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.265568 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.265579 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:57Z","lastTransitionTime":"2025-12-16T11:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.370905 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.370957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.370981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.371025 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.371049 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:57Z","lastTransitionTime":"2025-12-16T11:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.473893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.473932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.473943 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.473959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.473971 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:57Z","lastTransitionTime":"2025-12-16T11:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.522398 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.522446 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:57 crc kubenswrapper[4805]: E1216 11:55:57.522508 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.522467 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:55:57 crc kubenswrapper[4805]: E1216 11:55:57.522656 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:55:57 crc kubenswrapper[4805]: E1216 11:55:57.522751 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.576812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.576885 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.576902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.576922 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.576934 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:57Z","lastTransitionTime":"2025-12-16T11:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.679433 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.679483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.679494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.679510 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.679521 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:57Z","lastTransitionTime":"2025-12-16T11:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.736105 4805 generic.go:334] "Generic (PLEG): container finished" podID="22790094-9e12-4de0-a0bf-5300bed8938f" containerID="aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d" exitCode=0 Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.736193 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" event={"ID":"22790094-9e12-4de0-a0bf-5300bed8938f","Type":"ContainerDied","Data":"aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d"} Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.736281 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.754054 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.775729 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd1
17a2c5f8c5d6192a41b9b3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.782247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.782277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.782289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.782596 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.782614 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:57Z","lastTransitionTime":"2025-12-16T11:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.807960 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.823792 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.837626 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.852493 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.873245 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.885841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.885871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.885879 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.885893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.885903 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:57Z","lastTransitionTime":"2025-12-16T11:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.889120 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.904123 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.936432 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.979252 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f114
6dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.987961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.987998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.988008 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.988024 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.988036 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:57Z","lastTransitionTime":"2025-12-16T11:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.990851 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:57 crc kubenswrapper[4805]: I1216 11:55:57.999816 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:57Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.010478 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.022015 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.042833 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.055385 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.068633 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.089808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.089834 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.089843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.089861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.089878 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:58Z","lastTransitionTime":"2025-12-16T11:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.097695 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd117a2c5f8c5d6192a41b9b3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.108988 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.121638 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.135025 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.149043 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.164822 4805 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.179920 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-api
server-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.192513 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.192779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.192820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.192831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.192848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.192866 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:58Z","lastTransitionTime":"2025-12-16T11:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.201875 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.220351 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.232448 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.242939 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.254565 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.295426 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.295470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.295482 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.295498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.295507 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:58Z","lastTransitionTime":"2025-12-16T11:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.397519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.397575 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.397584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.397596 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.397628 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:58Z","lastTransitionTime":"2025-12-16T11:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.500927 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.500971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.500982 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.500999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.501011 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:58Z","lastTransitionTime":"2025-12-16T11:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.604125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.604205 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.604228 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.604249 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.604265 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:58Z","lastTransitionTime":"2025-12-16T11:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.706430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.706495 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.706511 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.706537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.706556 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:58Z","lastTransitionTime":"2025-12-16T11:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.743432 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" event={"ID":"22790094-9e12-4de0-a0bf-5300bed8938f","Type":"ContainerStarted","Data":"4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c"} Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.762353 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\
\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750
a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.774613 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.785897 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.796952 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.806956 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.808406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.808459 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.808470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.808486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.808495 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:58Z","lastTransitionTime":"2025-12-16T11:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.820597 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.830299 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.840498 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.855554 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.874922 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd1
17a2c5f8c5d6192a41b9b3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.887023 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.899416 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.911084 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.911137 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.911166 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.911187 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.911199 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:58Z","lastTransitionTime":"2025-12-16T11:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.911269 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.921023 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:58 crc kubenswrapper[4805]: I1216 11:55:58.931522 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:55:58Z is after 2025-08-24T17:21:41Z" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.013745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.013792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.013802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.013831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.013839 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:59Z","lastTransitionTime":"2025-12-16T11:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.116486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.116546 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.116556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.116605 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.116623 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:59Z","lastTransitionTime":"2025-12-16T11:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.218917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.218965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.218976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.218993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.219006 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:59Z","lastTransitionTime":"2025-12-16T11:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.321341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.321406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.321427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.321454 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.321475 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:59Z","lastTransitionTime":"2025-12-16T11:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.423909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.423967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.423983 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.424005 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.424021 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:59Z","lastTransitionTime":"2025-12-16T11:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.493161 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.493261 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.493284 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.493301 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.493336 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.493360 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:56:15.493335275 +0000 UTC m=+49.211593080 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.493414 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.493461 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-16 11:56:15.493447559 +0000 UTC m=+49.211705364 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.493468 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.493482 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.493491 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.493534 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-16 11:56:15.493514581 +0000 UTC m=+49.211772386 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.493575 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.493595 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-16 11:56:15.493589813 +0000 UTC m=+49.211847618 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.493645 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.493653 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.493659 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.493680 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-16 11:56:15.493671945 +0000 UTC m=+49.211929750 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.521813 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.521831 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.521939 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.521994 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.522118 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:55:59 crc kubenswrapper[4805]: E1216 11:55:59.522236 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.527079 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.527127 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.527153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.527171 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.527185 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:59Z","lastTransitionTime":"2025-12-16T11:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.629572 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.629636 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.629648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.629667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.629681 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:59Z","lastTransitionTime":"2025-12-16T11:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.731524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.731554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.731567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.731582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.731601 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:59Z","lastTransitionTime":"2025-12-16T11:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.833899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.833928 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.833936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.833948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.833957 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:59Z","lastTransitionTime":"2025-12-16T11:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.937044 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.937077 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.937085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.937097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:55:59 crc kubenswrapper[4805]: I1216 11:55:59.937106 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:55:59Z","lastTransitionTime":"2025-12-16T11:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.039897 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.039938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.039949 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.039963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.039974 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:00Z","lastTransitionTime":"2025-12-16T11:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.142591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.142644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.142654 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.142708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.142732 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:00Z","lastTransitionTime":"2025-12-16T11:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.245213 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.245256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.245265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.245280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.245289 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:00Z","lastTransitionTime":"2025-12-16T11:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.347740 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.347797 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.347814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.347868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.347886 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:00Z","lastTransitionTime":"2025-12-16T11:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.450116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.450168 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.450177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.450192 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.450201 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:00Z","lastTransitionTime":"2025-12-16T11:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.552438 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:00 crc kubenswrapper[4805]: E1216 11:56:00.552590 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.553820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.553862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.553876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.553892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.553904 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:00Z","lastTransitionTime":"2025-12-16T11:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.655987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.656023 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.656034 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.656060 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.656070 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:00Z","lastTransitionTime":"2025-12-16T11:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.759006 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.759087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.759106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.759163 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.759179 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:00Z","lastTransitionTime":"2025-12-16T11:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.861869 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.862198 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.862213 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.862228 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.862256 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:00Z","lastTransitionTime":"2025-12-16T11:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.965687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.965721 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.965733 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.965749 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:00 crc kubenswrapper[4805]: I1216 11:56:00.965760 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:00Z","lastTransitionTime":"2025-12-16T11:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.068207 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.068241 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.068250 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.068264 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.068273 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:01Z","lastTransitionTime":"2025-12-16T11:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.170779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.170831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.170840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.170862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.170873 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:01Z","lastTransitionTime":"2025-12-16T11:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.272959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.272998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.273009 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.273027 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.273044 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:01Z","lastTransitionTime":"2025-12-16T11:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.374856 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.374912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.374924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.374940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.374957 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:01Z","lastTransitionTime":"2025-12-16T11:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.477883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.477928 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.477943 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.477962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.477977 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:01Z","lastTransitionTime":"2025-12-16T11:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.522108 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.522171 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:01 crc kubenswrapper[4805]: E1216 11:56:01.522261 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:01 crc kubenswrapper[4805]: E1216 11:56:01.522342 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.580284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.580349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.580362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.580382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.580395 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:01Z","lastTransitionTime":"2025-12-16T11:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.682763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.682796 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.682806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.682823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.682833 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:01Z","lastTransitionTime":"2025-12-16T11:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.752128 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g"] Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.752636 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.756030 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.756272 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.767397 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.777900 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.785087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.785123 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.785131 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.785158 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.785169 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:01Z","lastTransitionTime":"2025-12-16T11:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.787696 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.798946 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.811188 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.823022 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.835860 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCo
ntainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025
-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whe
reabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.853250 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd1
17a2c5f8c5d6192a41b9b3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.865916 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.867382 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/deff7b04-18d2-47cb-ae70-e5b2b70367bf-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zl99g\" (UID: \"deff7b04-18d2-47cb-ae70-e5b2b70367bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.867447 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/deff7b04-18d2-47cb-ae70-e5b2b70367bf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zl99g\" (UID: \"deff7b04-18d2-47cb-ae70-e5b2b70367bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.867477 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsdg8\" (UniqueName: \"kubernetes.io/projected/deff7b04-18d2-47cb-ae70-e5b2b70367bf-kube-api-access-vsdg8\") pod \"ovnkube-control-plane-749d76644c-zl99g\" (UID: \"deff7b04-18d2-47cb-ae70-e5b2b70367bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.867533 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/deff7b04-18d2-47cb-ae70-e5b2b70367bf-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zl99g\" (UID: \"deff7b04-18d2-47cb-ae70-e5b2b70367bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.878257 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.887011 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.887046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.887055 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.887072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.887083 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:01Z","lastTransitionTime":"2025-12-16T11:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.891588 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.904931 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.917318 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.930384 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.950281 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f114
6dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.962655 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:01Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.968201 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/deff7b04-18d2-47cb-ae70-e5b2b70367bf-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zl99g\" (UID: \"deff7b04-18d2-47cb-ae70-e5b2b70367bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.968251 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/deff7b04-18d2-47cb-ae70-e5b2b70367bf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zl99g\" (UID: \"deff7b04-18d2-47cb-ae70-e5b2b70367bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.968268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/deff7b04-18d2-47cb-ae70-e5b2b70367bf-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zl99g\" (UID: \"deff7b04-18d2-47cb-ae70-e5b2b70367bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.968284 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsdg8\" (UniqueName: \"kubernetes.io/projected/deff7b04-18d2-47cb-ae70-e5b2b70367bf-kube-api-access-vsdg8\") pod \"ovnkube-control-plane-749d76644c-zl99g\" (UID: \"deff7b04-18d2-47cb-ae70-e5b2b70367bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.968738 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/deff7b04-18d2-47cb-ae70-e5b2b70367bf-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zl99g\" (UID: \"deff7b04-18d2-47cb-ae70-e5b2b70367bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.968995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/deff7b04-18d2-47cb-ae70-e5b2b70367bf-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zl99g\" (UID: \"deff7b04-18d2-47cb-ae70-e5b2b70367bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.972905 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/deff7b04-18d2-47cb-ae70-e5b2b70367bf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zl99g\" (UID: \"deff7b04-18d2-47cb-ae70-e5b2b70367bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.984808 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsdg8\" (UniqueName: \"kubernetes.io/projected/deff7b04-18d2-47cb-ae70-e5b2b70367bf-kube-api-access-vsdg8\") pod \"ovnkube-control-plane-749d76644c-zl99g\" (UID: \"deff7b04-18d2-47cb-ae70-e5b2b70367bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.988870 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.989004 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.989075 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.989156 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:01 crc kubenswrapper[4805]: I1216 11:56:01.989214 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:01Z","lastTransitionTime":"2025-12-16T11:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.064515 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.091159 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.091182 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.091190 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.091202 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.091210 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:02Z","lastTransitionTime":"2025-12-16T11:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.193256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.193283 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.193292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.193303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.193312 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:02Z","lastTransitionTime":"2025-12-16T11:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.296839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.296865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.296877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.297347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.297374 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:02Z","lastTransitionTime":"2025-12-16T11:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.399668 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.399700 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.399708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.399724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.399733 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:02Z","lastTransitionTime":"2025-12-16T11:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.502052 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.502302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.502398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.502481 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.502556 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:02Z","lastTransitionTime":"2025-12-16T11:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.522581 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:02 crc kubenswrapper[4805]: E1216 11:56:02.522687 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.605043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.605093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.605105 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.605119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.605129 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:02Z","lastTransitionTime":"2025-12-16T11:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.707373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.707432 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.707446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.707466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.707481 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:02Z","lastTransitionTime":"2025-12-16T11:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.757527 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovnkube-controller/0.log" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.760004 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerID="16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd117a2c5f8c5d6192a41b9b3f5" exitCode=1 Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.760058 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerDied","Data":"16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd117a2c5f8c5d6192a41b9b3f5"} Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.760859 4805 scope.go:117] "RemoveContainer" containerID="16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd117a2c5f8c5d6192a41b9b3f5" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.761659 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" event={"ID":"deff7b04-18d2-47cb-ae70-e5b2b70367bf","Type":"ContainerStarted","Data":"9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47"} Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.761703 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" event={"ID":"deff7b04-18d2-47cb-ae70-e5b2b70367bf","Type":"ContainerStarted","Data":"60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d"} Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.761718 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" event={"ID":"deff7b04-18d2-47cb-ae70-e5b2b70367bf","Type":"ContainerStarted","Data":"133caeb21424c593460b96ae8d3c83f378d6f81b780f2116e9188ed0da16b0fc"} Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.786020 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.801855 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.810868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.810904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.810913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.810964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.810976 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:02Z","lastTransitionTime":"2025-12-16T11:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.818009 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.822930 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-ct6d8"] Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 
11:56:02.823503 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:02 crc kubenswrapper[4805]: E1216 11:56:02.823639 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.829576 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.842891 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.861917 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.874420 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.889278 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.900892 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.914536 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.914684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.914729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.914742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.914758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.914772 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:02Z","lastTransitionTime":"2025-12-16T11:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.926214 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.939159 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.949833 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.961834 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.976770 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs\") pod \"network-metrics-daemon-ct6d8\" (UID: \"2cc0dc93-5341-4b1e-840d-2c0c951ff142\") " pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.976822 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb6q4\" (UniqueName: \"kubernetes.io/projected/2cc0dc93-5341-4b1e-840d-2c0c951ff142-kube-api-access-pb6q4\") pod \"network-metrics-daemon-ct6d8\" (UID: \"2cc0dc93-5341-4b1e-840d-2c0c951ff142\") " pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.979117 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:02 crc kubenswrapper[4805]: I1216 11:56:02.998067 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd117a2c5f8c5d6192a41b9b3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd117a2c5f8c5d6192a41b9b3f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1216 11:56:01.189577 6024 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1216 11:56:01.189654 6024 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1216 11:56:01.189693 6024 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1216 11:56:01.189715 6024 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1216 11:56:01.189732 6024 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1216 11:56:01.189723 6024 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1216 11:56:01.189799 6024 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1216 11:56:01.189800 6024 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1216 11:56:01.189824 6024 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1216 11:56:01.189830 6024 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1216 11:56:01.189860 6024 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1216 11:56:01.189876 6024 factory.go:656] Stopping watch factory\\\\nI1216 11:56:01.189910 6024 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1216 11:56:01.189953 6024 ovnkube.go:599] Stopped ovnkube\\\\nI1216 11:56:01.189884 6024 handler.go:208] Removed *v1.Node event handler 
7\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:02Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.017483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.017530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.017540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.017556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.017567 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:03Z","lastTransitionTime":"2025-12-16T11:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.021104 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.034625 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.047702 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.059027 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.074250 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.077332 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs\") pod \"network-metrics-daemon-ct6d8\" (UID: \"2cc0dc93-5341-4b1e-840d-2c0c951ff142\") " pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.077374 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb6q4\" (UniqueName: \"kubernetes.io/projected/2cc0dc93-5341-4b1e-840d-2c0c951ff142-kube-api-access-pb6q4\") pod \"network-metrics-daemon-ct6d8\" (UID: \"2cc0dc93-5341-4b1e-840d-2c0c951ff142\") " pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:03 crc kubenswrapper[4805]: E1216 11:56:03.077473 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 16 
11:56:03 crc kubenswrapper[4805]: E1216 11:56:03.077529 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs podName:2cc0dc93-5341-4b1e-840d-2c0c951ff142 nodeName:}" failed. No retries permitted until 2025-12-16 11:56:03.57751426 +0000 UTC m=+37.295772055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs") pod "network-metrics-daemon-ct6d8" (UID: "2cc0dc93-5341-4b1e-840d-2c0c951ff142") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.086848 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"
name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.102525 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc0dc93-5341-4b1e-840d-2c0c951ff142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ct6d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.103345 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb6q4\" (UniqueName: \"kubernetes.io/projected/2cc0dc93-5341-4b1e-840d-2c0c951ff142-kube-api-access-pb6q4\") pod \"network-metrics-daemon-ct6d8\" (UID: \"2cc0dc93-5341-4b1e-840d-2c0c951ff142\") " pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.115943 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.120238 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.120273 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.120281 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.120294 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.120302 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:03Z","lastTransitionTime":"2025-12-16T11:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.128026 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.139192 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.153863 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.171276 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd1
17a2c5f8c5d6192a41b9b3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd117a2c5f8c5d6192a41b9b3f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1216 11:56:01.189577 6024 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1216 11:56:01.189654 6024 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1216 11:56:01.189693 6024 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1216 11:56:01.189715 6024 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1216 11:56:01.189732 6024 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1216 11:56:01.189723 6024 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1216 11:56:01.189799 6024 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1216 11:56:01.189800 6024 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1216 11:56:01.189824 6024 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1216 11:56:01.189830 6024 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1216 11:56:01.189860 6024 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1216 11:56:01.189876 6024 factory.go:656] Stopping watch factory\\\\nI1216 11:56:01.189910 6024 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1216 11:56:01.189953 6024 ovnkube.go:599] Stopped ovnkube\\\\nI1216 11:56:01.189884 6024 handler.go:208] Removed *v1.Node event handler 
7\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.183051 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-cr
c-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 
11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.200758 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.222471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.222705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.222808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.222485 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.222886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.223066 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:03Z","lastTransitionTime":"2025-12-16T11:56:03Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.232170 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.243791 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.324673 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.324717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.324729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.324745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.324755 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:03Z","lastTransitionTime":"2025-12-16T11:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.427180 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.427236 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.427246 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.427259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.427269 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:03Z","lastTransitionTime":"2025-12-16T11:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.521861 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.521917 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:03 crc kubenswrapper[4805]: E1216 11:56:03.522002 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:03 crc kubenswrapper[4805]: E1216 11:56:03.522110 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.529527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.529576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.529587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.529605 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.529618 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:03Z","lastTransitionTime":"2025-12-16T11:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.582496 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs\") pod \"network-metrics-daemon-ct6d8\" (UID: \"2cc0dc93-5341-4b1e-840d-2c0c951ff142\") " pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:03 crc kubenswrapper[4805]: E1216 11:56:03.582612 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 16 11:56:03 crc kubenswrapper[4805]: E1216 11:56:03.582668 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs podName:2cc0dc93-5341-4b1e-840d-2c0c951ff142 nodeName:}" failed. No retries permitted until 2025-12-16 11:56:04.58265372 +0000 UTC m=+38.300911525 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs") pod "network-metrics-daemon-ct6d8" (UID: "2cc0dc93-5341-4b1e-840d-2c0c951ff142") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.631064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.631090 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.631099 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.631114 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.631122 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:03Z","lastTransitionTime":"2025-12-16T11:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.732692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.732726 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.732742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.732760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.732770 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:03Z","lastTransitionTime":"2025-12-16T11:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.766552 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovnkube-controller/0.log" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.769637 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerStarted","Data":"e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783"} Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.769783 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.781264 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.792836 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.805031 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.816284 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 
11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.827581 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.834752 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.834788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.834796 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.834811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.834829 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:03Z","lastTransitionTime":"2025-12-16T11:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.839403 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.853113 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.868509 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.887620 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2d
d0163a46e8f5167c3ce65783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd117a2c5f8c5d6192a41b9b3f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1216 11:56:01.189577 6024 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1216 11:56:01.189654 6024 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1216 11:56:01.189693 6024 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1216 11:56:01.189715 6024 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1216 11:56:01.189732 6024 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1216 11:56:01.189723 6024 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1216 11:56:01.189799 6024 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1216 11:56:01.189800 6024 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1216 11:56:01.189824 6024 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1216 11:56:01.189830 6024 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1216 11:56:01.189860 6024 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1216 11:56:01.189876 6024 factory.go:656] Stopping watch factory\\\\nI1216 11:56:01.189910 6024 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1216 11:56:01.189953 6024 ovnkube.go:599] Stopped ovnkube\\\\nI1216 11:56:01.189884 6024 handler.go:208] Removed *v1.Node event handler 
7\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.898996 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc0dc93-5341-4b1e-840d-2c0c951ff142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ct6d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.914566 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.928155 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.937050 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.937091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.937103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.937185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.937217 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:03Z","lastTransitionTime":"2025-12-16T11:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.944259 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.955964 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:03 crc kubenswrapper[4805]: I1216 11:56:03.972157 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:03Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.015441 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.031297 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.039968 4805 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.040027 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.040043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.040060 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.040074 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:04Z","lastTransitionTime":"2025-12-16T11:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.142686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.142721 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.142729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.142742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.142753 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:04Z","lastTransitionTime":"2025-12-16T11:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.244645 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.244680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.244696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.244711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.244721 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:04Z","lastTransitionTime":"2025-12-16T11:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.347345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.347613 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.347623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.347641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.347651 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:04Z","lastTransitionTime":"2025-12-16T11:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.449807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.449882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.449900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.449924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.449943 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:04Z","lastTransitionTime":"2025-12-16T11:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.521826 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.521958 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:04 crc kubenswrapper[4805]: E1216 11:56:04.521969 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:04 crc kubenswrapper[4805]: E1216 11:56:04.522285 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.552953 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.553214 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.553338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.553411 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.553475 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:04Z","lastTransitionTime":"2025-12-16T11:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.596353 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs\") pod \"network-metrics-daemon-ct6d8\" (UID: \"2cc0dc93-5341-4b1e-840d-2c0c951ff142\") " pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:04 crc kubenswrapper[4805]: E1216 11:56:04.596785 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 16 11:56:04 crc kubenswrapper[4805]: E1216 11:56:04.597584 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs podName:2cc0dc93-5341-4b1e-840d-2c0c951ff142 nodeName:}" failed. No retries permitted until 2025-12-16 11:56:06.597551776 +0000 UTC m=+40.315809621 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs") pod "network-metrics-daemon-ct6d8" (UID: "2cc0dc93-5341-4b1e-840d-2c0c951ff142") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.656043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.656081 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.656092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.656109 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.656119 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:04Z","lastTransitionTime":"2025-12-16T11:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.758677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.758727 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.758740 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.758756 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.758768 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:04Z","lastTransitionTime":"2025-12-16T11:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.774280 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovnkube-controller/1.log" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.775014 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovnkube-controller/0.log" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.777745 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerID="e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783" exitCode=1 Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.777777 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerDied","Data":"e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783"} Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.777820 4805 scope.go:117] "RemoveContainer" containerID="16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd117a2c5f8c5d6192a41b9b3f5" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.778730 4805 scope.go:117] "RemoveContainer" containerID="e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783" Dec 16 11:56:04 crc kubenswrapper[4805]: E1216 11:56:04.778976 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-lwjrh_openshift-ovn-kubernetes(cb7da1ad-f74d-471f-a98f-274cef7fe393)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.790602 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.803408 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.812684 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.821363 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 
11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.829609 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc0dc93-5341-4b1e-840d-2c0c951ff142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ct6d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.839558 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.849782 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.860817 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.860861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.860874 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.860894 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.860907 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:04Z","lastTransitionTime":"2025-12-16T11:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.861501 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.875458 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.894099 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd117a2c5f8c5d6192a41b9b3f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1216 11:56:01.189577 6024 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1216 11:56:01.189654 6024 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1216 11:56:01.189693 6024 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1216 11:56:01.189715 6024 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1216 11:56:01.189732 6024 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1216 11:56:01.189723 6024 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1216 11:56:01.189799 6024 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1216 11:56:01.189800 6024 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1216 11:56:01.189824 6024 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1216 11:56:01.189830 6024 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1216 11:56:01.189860 6024 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1216 11:56:01.189876 6024 factory.go:656] Stopping watch factory\\\\nI1216 11:56:01.189910 6024 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1216 11:56:01.189953 6024 ovnkube.go:599] Stopped ovnkube\\\\nI1216 11:56:01.189884 6024 handler.go:208] Removed *v1.Node event handler 7\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:04Z\\\",\\\"message\\\":\\\"ns/node-resolver-btjs7 openshift-kube-apiserver/kube-apiserver-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g openshift-image-registry/node-ca-br4tl 
openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1216 11:56:04.036521 6235 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1216 11:56:04.036537 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036580 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036596 6235 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1216 11:56:04.036603 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1216 11:56:04.036614 6235 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036629 6235 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1216 11:56:04.036691 6235 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.908200 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.919756 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.931849 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.941878 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.953090 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.962493 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.962532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.962543 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.962558 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.962569 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:04Z","lastTransitionTime":"2025-12-16T11:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.969587 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943
a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:04 crc kubenswrapper[4805]: I1216 11:56:04.978935 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:04Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.065012 4805 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.065065 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.065076 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.065091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.065121 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:05Z","lastTransitionTime":"2025-12-16T11:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.167818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.167903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.167920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.167935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.167946 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:05Z","lastTransitionTime":"2025-12-16T11:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.270586 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.270632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.270640 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.270655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.270667 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:05Z","lastTransitionTime":"2025-12-16T11:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.372904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.372937 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.372946 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.372960 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.372970 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:05Z","lastTransitionTime":"2025-12-16T11:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.475370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.475696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.475769 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.475838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.475914 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:05Z","lastTransitionTime":"2025-12-16T11:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.521979 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:05 crc kubenswrapper[4805]: E1216 11:56:05.522117 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.522131 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:05 crc kubenswrapper[4805]: E1216 11:56:05.522281 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.578798 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.578837 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.578846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.578875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.578885 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:05Z","lastTransitionTime":"2025-12-16T11:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.681088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.682891 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.682940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.682965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.682977 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:05Z","lastTransitionTime":"2025-12-16T11:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.786572 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovnkube-controller/1.log" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.786737 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.786770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.786781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.786799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.786810 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:05Z","lastTransitionTime":"2025-12-16T11:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.889243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.889292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.889308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.889331 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.889345 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:05Z","lastTransitionTime":"2025-12-16T11:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.992389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.992465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.992488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.992516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:05 crc kubenswrapper[4805]: I1216 11:56:05.992535 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:05Z","lastTransitionTime":"2025-12-16T11:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.095361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.095404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.095416 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.095432 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.095443 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:06Z","lastTransitionTime":"2025-12-16T11:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.197637 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.197893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.198032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.198173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.198329 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:06Z","lastTransitionTime":"2025-12-16T11:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.300239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.300755 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.300836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.300921 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.300992 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:06Z","lastTransitionTime":"2025-12-16T11:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.404049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.404114 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.404133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.404184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.404200 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:06Z","lastTransitionTime":"2025-12-16T11:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.506844 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.506886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.506896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.506911 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.506922 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:06Z","lastTransitionTime":"2025-12-16T11:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.522572 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.522692 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:06 crc kubenswrapper[4805]: E1216 11:56:06.522777 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:06 crc kubenswrapper[4805]: E1216 11:56:06.523243 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.549273 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673147
31ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.561412 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.575947 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.588214 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.599476 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.609706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.609759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.609770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.609785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.609794 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:06Z","lastTransitionTime":"2025-12-16T11:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.618050 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs\") pod \"network-metrics-daemon-ct6d8\" (UID: \"2cc0dc93-5341-4b1e-840d-2c0c951ff142\") " pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:06 crc kubenswrapper[4805]: E1216 11:56:06.618599 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 16 11:56:06 crc kubenswrapper[4805]: E1216 11:56:06.619042 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs podName:2cc0dc93-5341-4b1e-840d-2c0c951ff142 nodeName:}" failed. No retries permitted until 2025-12-16 11:56:10.618952589 +0000 UTC m=+44.337210414 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs") pod "network-metrics-daemon-ct6d8" (UID: "2cc0dc93-5341-4b1e-840d-2c0c951ff142") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.630495 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.641899 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.652728 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.663180 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.687517 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.707960 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2d
d0163a46e8f5167c3ce65783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16b2f0cd3c90b3e237eba7ad0f82b1e2a073ccd117a2c5f8c5d6192a41b9b3f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1216 11:56:01.189577 6024 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1216 11:56:01.189654 6024 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1216 11:56:01.189693 6024 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1216 11:56:01.189715 6024 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1216 11:56:01.189732 6024 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1216 11:56:01.189723 6024 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1216 11:56:01.189799 6024 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1216 11:56:01.189800 6024 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1216 11:56:01.189824 6024 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1216 11:56:01.189830 6024 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1216 11:56:01.189860 6024 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1216 11:56:01.189876 6024 factory.go:656] Stopping watch factory\\\\nI1216 11:56:01.189910 6024 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1216 11:56:01.189953 6024 ovnkube.go:599] Stopped ovnkube\\\\nI1216 11:56:01.189884 6024 handler.go:208] Removed *v1.Node event handler 7\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:04Z\\\",\\\"message\\\":\\\"ns/node-resolver-btjs7 openshift-kube-apiserver/kube-apiserver-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g openshift-image-registry/node-ca-br4tl openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1216 11:56:04.036521 6235 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1216 11:56:04.036537 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036580 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036596 6235 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1216 11:56:04.036603 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1216 11:56:04.036614 6235 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036629 6235 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1216 11:56:04.036691 6235 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.715746 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.715782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.715790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.715804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.715814 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:06Z","lastTransitionTime":"2025-12-16T11:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.722639 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc0dc93-5341-4b1e-840d-2c0c951ff142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ct6d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.747693 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.759585 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.771369 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.781423 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.793494 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:06Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.818096 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.818409 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.818486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.818591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.818670 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:06Z","lastTransitionTime":"2025-12-16T11:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.920654 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.920736 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.920755 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.920782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:06 crc kubenswrapper[4805]: I1216 11:56:06.920802 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:06Z","lastTransitionTime":"2025-12-16T11:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.023609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.023649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.023661 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.023676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.023688 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:07Z","lastTransitionTime":"2025-12-16T11:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.125586 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.125616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.125628 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.125644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.125658 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:07Z","lastTransitionTime":"2025-12-16T11:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.227737 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.227784 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.227795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.227810 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.227821 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:07Z","lastTransitionTime":"2025-12-16T11:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.330018 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.330064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.330076 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.330092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.330102 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:07Z","lastTransitionTime":"2025-12-16T11:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.432632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.432695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.432704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.432717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.432728 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:07Z","lastTransitionTime":"2025-12-16T11:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.478197 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.478245 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.478262 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.478277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.478288 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:07Z","lastTransitionTime":"2025-12-16T11:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:07 crc kubenswrapper[4805]: E1216 11:56:07.490904 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 
2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.493751 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.493795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.493807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.493820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.493830 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:07Z","lastTransitionTime":"2025-12-16T11:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:07 crc kubenswrapper[4805]: E1216 11:56:07.505569 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 
2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.508718 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.508751 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.508762 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.508796 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.508806 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:07Z","lastTransitionTime":"2025-12-16T11:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.518423 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.519107 4805 scope.go:117] "RemoveContainer" containerID="e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783" Dec 16 11:56:07 crc kubenswrapper[4805]: E1216 11:56:07.519260 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-lwjrh_openshift-ovn-kubernetes(cb7da1ad-f74d-471f-a98f-274cef7fe393)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" Dec 16 11:56:07 crc kubenswrapper[4805]: E1216 11:56:07.520786 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"cru
n\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.522459 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.522469 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:07 crc kubenswrapper[4805]: E1216 11:56:07.522549 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:07 crc kubenswrapper[4805]: E1216 11:56:07.522694 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.523713 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.523732 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.523742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.523757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.523766 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:07Z","lastTransitionTime":"2025-12-16T11:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:07 crc kubenswrapper[4805]: E1216 11:56:07.534710 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 
2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.537831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.537871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.537881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.537897 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.537906 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:07Z","lastTransitionTime":"2025-12-16T11:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.543653 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: E1216 11:56:07.550082 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: E1216 11:56:07.550261 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.551803 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.551834 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.551847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.551863 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.551874 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:07Z","lastTransitionTime":"2025-12-16T11:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.555167 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.564435 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.573845 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 
11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.585122 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.595576 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.607439 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.631488 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:04Z\\\",\\\"message\\\":\\\"ns/node-resolver-btjs7 openshift-kube-apiserver/kube-apiserver-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g openshift-image-registry/node-ca-br4tl openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1216 11:56:04.036521 6235 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1216 11:56:04.036537 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036580 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036596 6235 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1216 11:56:04.036603 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1216 11:56:04.036614 6235 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036629 6235 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1216 11:56:04.036691 6235 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lwjrh_openshift-ovn-kubernetes(cb7da1ad-f74d-471f-a98f-274cef7fe393)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.640955 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc0dc93-5341-4b1e-840d-2c0c951ff142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ct6d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.652923 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.653804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.653836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.653845 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.654294 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.654348 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:07Z","lastTransitionTime":"2025-12-16T11:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.666617 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.678037 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.688550 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.700691 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.711271 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.725036 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.736296 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:07Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.757158 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.757203 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.757215 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.757231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.757243 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:07Z","lastTransitionTime":"2025-12-16T11:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.860455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.860505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.860518 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.860536 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.860550 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:07Z","lastTransitionTime":"2025-12-16T11:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.963039 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.963094 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.963105 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.963121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:07 crc kubenswrapper[4805]: I1216 11:56:07.963133 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:07Z","lastTransitionTime":"2025-12-16T11:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.065283 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.065325 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.065336 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.065353 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.065370 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:08Z","lastTransitionTime":"2025-12-16T11:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.167840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.167883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.167893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.167909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.167921 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:08Z","lastTransitionTime":"2025-12-16T11:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.270391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.270440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.270452 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.270467 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.270478 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:08Z","lastTransitionTime":"2025-12-16T11:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
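Nearly everything that follows in this window is one condition logged on repeat: the kubelet re-records the same node events and republishes Ready=False roughly every 100 ms because the container-runtime network check keeps failing, and the message names the root cause, an empty /etc/kubernetes/cni/net.d/. A stdlib-only sketch (a hypothetical helper, run on the node itself) that waits for a CNI configuration to appear in the directory named by the NotReady message:

import os
import time

CNI_DIR = "/etc/kubernetes/cni/net.d/"  # directory named in the NotReady message

def cni_configs() -> list[str]:
    """Return the CNI config files currently present; empty means NotReady persists."""
    try:
        return sorted(
            name for name in os.listdir(CNI_DIR)
            if name.endswith((".conf", ".conflist", ".json"))
        )
    except FileNotFoundError:
        return []

while not (found := cni_configs()):
    print(f"no CNI configuration in {CNI_DIR}; waiting...")
    time.sleep(2)
print("CNI config present:", found)

Once the network provider (the OVN-Kubernetes components visible elsewhere in this log) writes its config file there, NetworkReady flips to true and this block stops repeating.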
Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.521672 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 16 11:56:08 crc kubenswrapper[4805]: E1216 11:56:08.521810 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 16 11:56:08 crc kubenswrapper[4805]: I1216 11:56:08.521986 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8"
Dec 16 11:56:08 crc kubenswrapper[4805]: E1216 11:56:08.522174 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142"
[... 10 identical node-status blocks, 11:56:08.578880 through 11:56:09.504737, elided ...]
Dec 16 11:56:09 crc kubenswrapper[4805]: I1216 11:56:09.522516 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 16 11:56:09 crc kubenswrapper[4805]: E1216 11:56:09.522620 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 16 11:56:09 crc kubenswrapper[4805]: I1216 11:56:09.522521 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 16 11:56:09 crc kubenswrapper[4805]: E1216 11:56:09.522865 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... 9 identical node-status blocks, 11:56:09.606575 through 11:56:10.426915, elided ...]
Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.522629 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.522670 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8"
Dec 16 11:56:10 crc kubenswrapper[4805]: E1216 11:56:10.522754 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 16 11:56:10 crc kubenswrapper[4805]: E1216 11:56:10.522836 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142"
pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.528794 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.528838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.528850 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.528864 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.528872 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:10Z","lastTransitionTime":"2025-12-16T11:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.631273 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.631321 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.631330 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.631349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.631359 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:10Z","lastTransitionTime":"2025-12-16T11:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.663429 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs\") pod \"network-metrics-daemon-ct6d8\" (UID: \"2cc0dc93-5341-4b1e-840d-2c0c951ff142\") " pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:10 crc kubenswrapper[4805]: E1216 11:56:10.663593 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 16 11:56:10 crc kubenswrapper[4805]: E1216 11:56:10.663829 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs podName:2cc0dc93-5341-4b1e-840d-2c0c951ff142 nodeName:}" failed. No retries permitted until 2025-12-16 11:56:18.663810767 +0000 UTC m=+52.382068572 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs") pod "network-metrics-daemon-ct6d8" (UID: "2cc0dc93-5341-4b1e-840d-2c0c951ff142") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.733511 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.733787 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.733919 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.734002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.734202 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:10Z","lastTransitionTime":"2025-12-16T11:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.836826 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.837094 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.837274 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.837372 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.837450 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:10Z","lastTransitionTime":"2025-12-16T11:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
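The "not registered" in this failure refers to the kubelet's local object cache, not the API server: the Secret has not (yet) been delivered to the kubelet for this pod, so volume setup backs off (durationBeforeRetry 8s, next attempt at 11:56:18) instead of retrying immediately. A sketch for checking whether the Secret actually exists server-side while the kubelet's view lags, using the third-party kubernetes Python client (assumes a working kubeconfig; the name and namespace are taken from the log record above):

from kubernetes import client, config  # third-party: pip install kubernetes

config.load_kube_config()  # use config.load_incluster_config() when run inside a pod
v1 = client.CoreV1Api()

try:
    secret = v1.read_namespaced_secret(
        name="metrics-daemon-secret", namespace="openshift-multus"
    )
    print("secret exists; data keys:", sorted((secret.data or {}).keys()))
except client.exceptions.ApiException as exc:
    # A 404 here would mean the object is genuinely absent from the API,
    # not merely unregistered in the kubelet's cache.
    print("lookup failed:", exc.status, exc.reason)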
Has your network provider started?"} Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.940997 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.941032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.941042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.941058 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:10 crc kubenswrapper[4805]: I1216 11:56:10.941070 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:10Z","lastTransitionTime":"2025-12-16T11:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.043859 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.043910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.043920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.043933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.043941 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:11Z","lastTransitionTime":"2025-12-16T11:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.146106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.146160 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.146169 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.146182 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.146191 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:11Z","lastTransitionTime":"2025-12-16T11:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.248981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.249054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.249070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.249093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.249108 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:11Z","lastTransitionTime":"2025-12-16T11:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.351719 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.351776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.351787 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.351803 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.351814 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:11Z","lastTransitionTime":"2025-12-16T11:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.454667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.454698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.454707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.454719 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.454729 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:11Z","lastTransitionTime":"2025-12-16T11:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.522158 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:11 crc kubenswrapper[4805]: E1216 11:56:11.522281 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.522578 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:11 crc kubenswrapper[4805]: E1216 11:56:11.522629 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.556525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.556563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.556574 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.556590 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.556601 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:11Z","lastTransitionTime":"2025-12-16T11:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.659257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.659300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.659311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.659326 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.659336 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:11Z","lastTransitionTime":"2025-12-16T11:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.761056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.761133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.761174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.761195 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.761209 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:11Z","lastTransitionTime":"2025-12-16T11:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.864897 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.864937 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.864948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.864963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.864979 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:11Z","lastTransitionTime":"2025-12-16T11:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.967859 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.968092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.968230 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.968297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:11 crc kubenswrapper[4805]: I1216 11:56:11.968377 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:11Z","lastTransitionTime":"2025-12-16T11:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.071191 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.071228 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.071239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.071254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.071263 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:12Z","lastTransitionTime":"2025-12-16T11:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.173705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.173769 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.173780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.173816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.173827 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:12Z","lastTransitionTime":"2025-12-16T11:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.276727 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.276979 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.277180 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.277323 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.277452 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:12Z","lastTransitionTime":"2025-12-16T11:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.380097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.380165 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.380176 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.380192 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.380208 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:12Z","lastTransitionTime":"2025-12-16T11:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.482420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.482481 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.482492 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.482507 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.482517 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:12Z","lastTransitionTime":"2025-12-16T11:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.521708 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.521715 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:12 crc kubenswrapper[4805]: E1216 11:56:12.521948 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:12 crc kubenswrapper[4805]: E1216 11:56:12.522000 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.584853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.584912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.584923 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.584940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.584952 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:12Z","lastTransitionTime":"2025-12-16T11:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.687894 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.687968 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.687990 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.688016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.688036 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:12Z","lastTransitionTime":"2025-12-16T11:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.790988 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.791020 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.791029 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.791044 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.791052 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:12Z","lastTransitionTime":"2025-12-16T11:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.892955 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.892987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.893009 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.893037 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.893047 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:12Z","lastTransitionTime":"2025-12-16T11:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.995981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.996028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.996036 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.996050 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:12 crc kubenswrapper[4805]: I1216 11:56:12.996059 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:12Z","lastTransitionTime":"2025-12-16T11:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.098234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.098284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.098301 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.098326 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.098341 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:13Z","lastTransitionTime":"2025-12-16T11:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.200459 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.200716 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.200799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.200883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.200997 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:13Z","lastTransitionTime":"2025-12-16T11:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.304216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.304256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.304270 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.304286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.304297 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:13Z","lastTransitionTime":"2025-12-16T11:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.406322 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.406563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.406702 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.406808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.406922 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:13Z","lastTransitionTime":"2025-12-16T11:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.509120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.509212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.509225 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.509244 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.509257 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:13Z","lastTransitionTime":"2025-12-16T11:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.522508 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.522601 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:13 crc kubenswrapper[4805]: E1216 11:56:13.522678 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:13 crc kubenswrapper[4805]: E1216 11:56:13.522842 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.611747 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.611812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.611835 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.611865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.611888 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:13Z","lastTransitionTime":"2025-12-16T11:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.713907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.713961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.713978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.714056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.714092 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:13Z","lastTransitionTime":"2025-12-16T11:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.816161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.816543 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.816693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.816795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.816879 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:13Z","lastTransitionTime":"2025-12-16T11:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.919290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.919332 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.919344 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.919361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:13 crc kubenswrapper[4805]: I1216 11:56:13.919375 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:13Z","lastTransitionTime":"2025-12-16T11:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.021623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.021883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.022021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.022131 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.022250 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:14Z","lastTransitionTime":"2025-12-16T11:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.124884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.124932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.124941 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.124955 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.124965 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:14Z","lastTransitionTime":"2025-12-16T11:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.226984 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.227373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.227465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.227578 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.227657 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:14Z","lastTransitionTime":"2025-12-16T11:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.329629 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.329673 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.329683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.329699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.329710 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:14Z","lastTransitionTime":"2025-12-16T11:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.432054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.432096 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.432107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.432124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.432136 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:14Z","lastTransitionTime":"2025-12-16T11:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.522260 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.522382 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:14 crc kubenswrapper[4805]: E1216 11:56:14.522386 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:14 crc kubenswrapper[4805]: E1216 11:56:14.522465 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.534952 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.534991 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.535003 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.535019 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.535031 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:14Z","lastTransitionTime":"2025-12-16T11:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.637289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.637334 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.637347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.637369 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.637388 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:14Z","lastTransitionTime":"2025-12-16T11:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.739823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.740083 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.740184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.740303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.740395 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:14Z","lastTransitionTime":"2025-12-16T11:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.842716 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.842760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.842772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.842789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.842801 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:14Z","lastTransitionTime":"2025-12-16T11:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.946570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.946609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.946619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.946633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:14 crc kubenswrapper[4805]: I1216 11:56:14.946642 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:14Z","lastTransitionTime":"2025-12-16T11:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.033280 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.045489 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.045714 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.049305 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.049339 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.049347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.049362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.049373 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:15Z","lastTransitionTime":"2025-12-16T11:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.059963 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.073879 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.086507 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.096658 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.114796 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f114
6dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.126401 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.138781 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.150078 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 
11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.151741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.151874 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.151939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.152011 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.152291 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:15Z","lastTransitionTime":"2025-12-16T11:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.164492 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.178312 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.195476 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.218893 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:04Z\\\",\\\"message\\\":\\\"ns/node-resolver-btjs7 openshift-kube-apiserver/kube-apiserver-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g openshift-image-registry/node-ca-br4tl openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1216 11:56:04.036521 6235 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1216 11:56:04.036537 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036580 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036596 6235 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1216 11:56:04.036603 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1216 11:56:04.036614 6235 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036629 6235 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1216 11:56:04.036691 6235 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lwjrh_openshift-ovn-kubernetes(cb7da1ad-f74d-471f-a98f-274cef7fe393)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.230350 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc0dc93-5341-4b1e-840d-2c0c951ff142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ct6d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.242399 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.254648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.254688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.254698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.254712 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.254721 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:15Z","lastTransitionTime":"2025-12-16T11:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.255182 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.268535 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:15Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.356707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.357028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.357133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.357275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.357371 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:15Z","lastTransitionTime":"2025-12-16T11:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.459832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.459871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.459883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.459898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.459908 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:15Z","lastTransitionTime":"2025-12-16T11:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.510439 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.510540 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.510577 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.510603 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:15 crc kubenswrapper[4805]: E1216 11:56:15.510677 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:56:47.510639009 +0000 UTC m=+81.228896814 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:15 crc kubenswrapper[4805]: E1216 11:56:15.510698 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:56:15 crc kubenswrapper[4805]: E1216 11:56:15.510721 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:56:15 crc kubenswrapper[4805]: E1216 11:56:15.510743 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:56:15 crc kubenswrapper[4805]: E1216 11:56:15.510763 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:56:15 crc kubenswrapper[4805]: E1216 11:56:15.510767 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-16 11:56:47.510751642 +0000 UTC m=+81.229009517 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:56:15 crc kubenswrapper[4805]: E1216 11:56:15.510802 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-16 11:56:47.510795474 +0000 UTC m=+81.229053279 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.510845 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:15 crc kubenswrapper[4805]: E1216 11:56:15.510959 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:56:15 crc kubenswrapper[4805]: E1216 11:56:15.510967 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:56:15 crc kubenswrapper[4805]: E1216 11:56:15.510974 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:56:15 crc kubenswrapper[4805]: E1216 11:56:15.510994 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-16 11:56:47.510987739 +0000 UTC m=+81.229245544 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:56:15 crc kubenswrapper[4805]: E1216 11:56:15.511032 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:56:15 crc kubenswrapper[4805]: E1216 11:56:15.511052 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-16 11:56:47.511045311 +0000 UTC m=+81.229303226 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.521864 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.521880 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:15 crc kubenswrapper[4805]: E1216 11:56:15.521967 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:15 crc kubenswrapper[4805]: E1216 11:56:15.522096 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.561619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.561647 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.561655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.561667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.561676 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:15Z","lastTransitionTime":"2025-12-16T11:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.664472 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.664507 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.664518 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.664532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.664543 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:15Z","lastTransitionTime":"2025-12-16T11:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.766791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.767015 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.767100 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.767206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.767282 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:15Z","lastTransitionTime":"2025-12-16T11:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.869448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.869484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.869541 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.869559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.869570 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:15Z","lastTransitionTime":"2025-12-16T11:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.971484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.971525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.971536 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.971571 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:15 crc kubenswrapper[4805]: I1216 11:56:15.971584 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:15Z","lastTransitionTime":"2025-12-16T11:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.074248 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.074291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.074303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.074319 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.074330 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:16Z","lastTransitionTime":"2025-12-16T11:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.177023 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.177057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.177068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.177084 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.177095 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:16Z","lastTransitionTime":"2025-12-16T11:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.279526 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.279803 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.279866 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.279931 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.280023 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:16Z","lastTransitionTime":"2025-12-16T11:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.382305 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.382348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.382356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.382370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.382379 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:16Z","lastTransitionTime":"2025-12-16T11:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.484299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.484336 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.484347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.484361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.484371 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:16Z","lastTransitionTime":"2025-12-16T11:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.521954 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.521995 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:16 crc kubenswrapper[4805]: E1216 11:56:16.522083 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:16 crc kubenswrapper[4805]: E1216 11:56:16.522217 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.537302 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.549029 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.558713 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.569726 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.585614 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2d
d0163a46e8f5167c3ce65783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:04Z\\\",\\\"message\\\":\\\"ns/node-resolver-btjs7 openshift-kube-apiserver/kube-apiserver-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g openshift-image-registry/node-ca-br4tl openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1216 11:56:04.036521 6235 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1216 11:56:04.036537 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036580 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036596 6235 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1216 11:56:04.036603 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1216 11:56:04.036614 6235 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036629 6235 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1216 11:56:04.036691 6235 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lwjrh_openshift-ovn-kubernetes(cb7da1ad-f74d-471f-a98f-274cef7fe393)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.586620 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.586786 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.586808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.586830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.586846 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:16Z","lastTransitionTime":"2025-12-16T11:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.596609 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc0dc93-5341-4b1e-840d-2c0c951ff142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ct6d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.609045 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.621510 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.633427 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.642656 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.654188 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.663791 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd7db4d-e488-4922-a120-7bf5242ef90f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f08fe672a7cdd5082cea8ed2cacc5c71754493f2470a7a81fa6baff03eee5c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ed200ae81bc9ecf0225a69d59eb4385bcba5823c87abc590d781340c60490f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e366188c71f88353931b92e19bc6a1c2b58eb3a55bf872be24918600a1f38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.681902 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f114
6dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.689164 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.689192 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.689201 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.689216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.689226 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:16Z","lastTransitionTime":"2025-12-16T11:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.691542 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.701898 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.714479 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.724779 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.736086 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:16Z is after 2025-08-24T17:21:41Z" Dec 16 
11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.790589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.790616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.790625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.790639 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.790648 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:16Z","lastTransitionTime":"2025-12-16T11:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.892956 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.892979 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.892987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.893000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.893008 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:16Z","lastTransitionTime":"2025-12-16T11:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.995279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.995316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.995326 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.995341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:16 crc kubenswrapper[4805]: I1216 11:56:16.995352 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:16Z","lastTransitionTime":"2025-12-16T11:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.097368 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.097649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.097729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.097798 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.097875 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:17Z","lastTransitionTime":"2025-12-16T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.199791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.200084 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.200226 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.200345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.200450 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:17Z","lastTransitionTime":"2025-12-16T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.302957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.302993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.303002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.303018 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.303029 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:17Z","lastTransitionTime":"2025-12-16T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.405663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.405706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.405719 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.405738 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.405750 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:17Z","lastTransitionTime":"2025-12-16T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.508339 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.508374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.508384 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.508401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.508412 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:17Z","lastTransitionTime":"2025-12-16T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.522601 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:17 crc kubenswrapper[4805]: E1216 11:56:17.522892 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.522608 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:17 crc kubenswrapper[4805]: E1216 11:56:17.523102 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.610069 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.610103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.610111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.610124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.610135 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:17Z","lastTransitionTime":"2025-12-16T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.712022 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.712058 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.712067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.712081 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.712091 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:17Z","lastTransitionTime":"2025-12-16T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.813846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.814099 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.814185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.814276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.814358 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:17Z","lastTransitionTime":"2025-12-16T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.851660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.851701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.851711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.851725 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.851735 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:17Z","lastTransitionTime":"2025-12-16T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:17 crc kubenswrapper[4805]: E1216 11:56:17.868001 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:17Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.872807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.873055 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.873198 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.873375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.873496 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:17Z","lastTransitionTime":"2025-12-16T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:17 crc kubenswrapper[4805]: E1216 11:56:17.885628 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:17Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.889059 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.889098 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.889114 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.889132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.889160 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:17Z","lastTransitionTime":"2025-12-16T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:17 crc kubenswrapper[4805]: E1216 11:56:17.900789 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:17Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.904195 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.904230 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.904238 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.904252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.904261 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:17Z","lastTransitionTime":"2025-12-16T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:17 crc kubenswrapper[4805]: E1216 11:56:17.914708 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:17Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.917549 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.917583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.917592 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.917609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.917619 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:17Z","lastTransitionTime":"2025-12-16T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:17 crc kubenswrapper[4805]: E1216 11:56:17.927119 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:17Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:17 crc kubenswrapper[4805]: E1216 11:56:17.927281 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.928557 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.928589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.928600 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.928616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:17 crc kubenswrapper[4805]: I1216 11:56:17.928627 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:17Z","lastTransitionTime":"2025-12-16T11:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.031023 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.031063 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.031072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.031086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.031094 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:18Z","lastTransitionTime":"2025-12-16T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.133924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.133961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.133973 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.133992 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.134003 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:18Z","lastTransitionTime":"2025-12-16T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.236412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.236661 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.236754 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.236841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.236947 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:18Z","lastTransitionTime":"2025-12-16T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.340172 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.340220 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.340234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.340252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.340267 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:18Z","lastTransitionTime":"2025-12-16T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.443067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.443327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.443395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.443487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.443549 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:18Z","lastTransitionTime":"2025-12-16T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.522592 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:18 crc kubenswrapper[4805]: E1216 11:56:18.522799 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.523293 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:18 crc kubenswrapper[4805]: E1216 11:56:18.523468 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.545668 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.545704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.545712 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.545727 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.545736 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:18Z","lastTransitionTime":"2025-12-16T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.648012 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.648041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.648049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.648062 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.648071 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:18Z","lastTransitionTime":"2025-12-16T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.744755 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs\") pod \"network-metrics-daemon-ct6d8\" (UID: \"2cc0dc93-5341-4b1e-840d-2c0c951ff142\") " pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:18 crc kubenswrapper[4805]: E1216 11:56:18.744888 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 16 11:56:18 crc kubenswrapper[4805]: E1216 11:56:18.744947 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs podName:2cc0dc93-5341-4b1e-840d-2c0c951ff142 nodeName:}" failed. No retries permitted until 2025-12-16 11:56:34.744929407 +0000 UTC m=+68.463187212 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs") pod "network-metrics-daemon-ct6d8" (UID: "2cc0dc93-5341-4b1e-840d-2c0c951ff142") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.750057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.750100 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.750115 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.750161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.750177 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:18Z","lastTransitionTime":"2025-12-16T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.852525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.852581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.852600 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.852623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.852641 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:18Z","lastTransitionTime":"2025-12-16T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.955021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.955056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.955065 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.955078 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:18 crc kubenswrapper[4805]: I1216 11:56:18.955086 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:18Z","lastTransitionTime":"2025-12-16T11:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.057974 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.058015 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.058023 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.058039 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.058048 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:19Z","lastTransitionTime":"2025-12-16T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.160691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.160743 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.160756 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.160777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.160789 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:19Z","lastTransitionTime":"2025-12-16T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.263232 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.263300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.263313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.263330 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.263343 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:19Z","lastTransitionTime":"2025-12-16T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.366170 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.366216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.366225 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.366242 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.366255 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:19Z","lastTransitionTime":"2025-12-16T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.473203 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.473253 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.473266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.473284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.473295 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:19Z","lastTransitionTime":"2025-12-16T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.521762 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.521847 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:19 crc kubenswrapper[4805]: E1216 11:56:19.522219 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:19 crc kubenswrapper[4805]: E1216 11:56:19.522315 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.578386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.578423 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.578466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.578480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.578490 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:19Z","lastTransitionTime":"2025-12-16T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.681455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.681499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.681511 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.681528 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.681537 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:19Z","lastTransitionTime":"2025-12-16T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.783473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.783734 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.783743 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.783755 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.783764 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:19Z","lastTransitionTime":"2025-12-16T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.887483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.887531 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.887543 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.887562 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.887574 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:19Z","lastTransitionTime":"2025-12-16T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.990311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.990349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.990359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.990373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:19 crc kubenswrapper[4805]: I1216 11:56:19.990381 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:19Z","lastTransitionTime":"2025-12-16T11:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.093132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.093177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.093187 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.093199 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.093208 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:20Z","lastTransitionTime":"2025-12-16T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.194951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.194999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.195009 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.195027 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.195043 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:20Z","lastTransitionTime":"2025-12-16T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.296909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.296951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.296960 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.296975 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.296986 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:20Z","lastTransitionTime":"2025-12-16T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.399607 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.399675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.399691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.399713 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.399728 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:20Z","lastTransitionTime":"2025-12-16T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.502290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.502328 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.502345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.502361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.502371 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:20Z","lastTransitionTime":"2025-12-16T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.521838 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.521975 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:20 crc kubenswrapper[4805]: E1216 11:56:20.522185 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:20 crc kubenswrapper[4805]: E1216 11:56:20.522332 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.604285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.604322 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.604336 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.604351 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.604365 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:20Z","lastTransitionTime":"2025-12-16T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.706932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.706977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.706987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.707001 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.707011 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:20Z","lastTransitionTime":"2025-12-16T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.809330 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.809378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.809387 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.809403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.809413 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:20Z","lastTransitionTime":"2025-12-16T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.911282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.911314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.911323 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.911335 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:20 crc kubenswrapper[4805]: I1216 11:56:20.911363 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:20Z","lastTransitionTime":"2025-12-16T11:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.014570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.014632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.014649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.014677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.014695 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:21Z","lastTransitionTime":"2025-12-16T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.117658 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.117697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.117732 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.117750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.117765 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:21Z","lastTransitionTime":"2025-12-16T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.220331 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.220382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.220391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.220407 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.220417 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:21Z","lastTransitionTime":"2025-12-16T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.322721 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.322761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.322773 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.322789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.322799 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:21Z","lastTransitionTime":"2025-12-16T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.425426 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.425498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.425518 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.425540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.425557 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:21Z","lastTransitionTime":"2025-12-16T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.522104 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.522131 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:21 crc kubenswrapper[4805]: E1216 11:56:21.522266 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:21 crc kubenswrapper[4805]: E1216 11:56:21.522314 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.522857 4805 scope.go:117] "RemoveContainer" containerID="e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.527665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.527718 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.527731 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.527748 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.527760 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:21Z","lastTransitionTime":"2025-12-16T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.631247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.631316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.631330 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.631357 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.631374 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:21Z","lastTransitionTime":"2025-12-16T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.733800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.733847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.733857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.733876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.733887 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:21Z","lastTransitionTime":"2025-12-16T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.836879 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.836924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.836940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.836957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.836969 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:21Z","lastTransitionTime":"2025-12-16T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.841888 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovnkube-controller/1.log" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.845062 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerStarted","Data":"9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57"} Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.845491 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.865469 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171
bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:21Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.882397 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:21Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.896222 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:21Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.910594 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:21Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.926811 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:21Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.940065 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.940113 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.940125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.940164 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.940178 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:21Z","lastTransitionTime":"2025-12-16T11:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.942409 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd7db4d-e488-4922-a120-7bf5242ef90f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f08fe672a7cdd5082cea8ed2cacc5c71754493f2470a7a81fa6baff03eee5c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ed200ae81bc9ecf0225a69d59eb4385bcba5823c87abc590d781340c60490f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd7
89a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e366188c71f88353931b92e19bc6a1c2b58eb3a55bf872be24918600a1f38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:21Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.964034 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f114
6dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:21Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:21 crc kubenswrapper[4805]: I1216 11:56:21.980585 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:21Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.000110 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:21Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.015686 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:22Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.032741 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:22Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.042710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.042745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.042759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.042774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.042786 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:22Z","lastTransitionTime":"2025-12-16T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.051966 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:22Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.072346 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:22Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.134908 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:22Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.146003 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.146086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.146101 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.146118 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.146517 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:22Z","lastTransitionTime":"2025-12-16T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.160467 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:22Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.179578 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:22Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.203675 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:04Z\\\",\\\"message\\\":\\\"ns/node-resolver-btjs7 openshift-kube-apiserver/kube-apiserver-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g openshift-image-registry/node-ca-br4tl openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1216 11:56:04.036521 6235 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1216 11:56:04.036537 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036580 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036596 6235 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1216 11:56:04.036603 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1216 11:56:04.036614 6235 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036629 6235 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1216 11:56:04.036691 6235 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:22Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.218024 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc0dc93-5341-4b1e-840d-2c0c951ff142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ct6d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:22Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.249114 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.249163 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.249206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.249221 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.249231 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:22Z","lastTransitionTime":"2025-12-16T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.351195 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.351249 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.351259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.351281 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.351292 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:22Z","lastTransitionTime":"2025-12-16T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.453714 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.453752 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.453761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.453775 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.453783 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:22Z","lastTransitionTime":"2025-12-16T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.522618 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.522669 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:22 crc kubenswrapper[4805]: E1216 11:56:22.522749 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:22 crc kubenswrapper[4805]: E1216 11:56:22.522827 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.555992 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.556040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.556052 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.556067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.556079 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:22Z","lastTransitionTime":"2025-12-16T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.659231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.659294 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.659305 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.659324 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.659337 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:22Z","lastTransitionTime":"2025-12-16T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.761739 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.761778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.761787 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.761802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.761811 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:22Z","lastTransitionTime":"2025-12-16T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.864647 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.864681 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.864691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.864704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.864714 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:22Z","lastTransitionTime":"2025-12-16T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.967064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.967108 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.967120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.967135 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:22 crc kubenswrapper[4805]: I1216 11:56:22.967161 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:22Z","lastTransitionTime":"2025-12-16T11:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.069379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.069412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.069423 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.069438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.069450 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:23Z","lastTransitionTime":"2025-12-16T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.172070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.172113 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.172124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.172163 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.172177 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:23Z","lastTransitionTime":"2025-12-16T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.274287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.274348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.274363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.274381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.274392 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:23Z","lastTransitionTime":"2025-12-16T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.376743 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.376785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.376794 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.376809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.376819 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:23Z","lastTransitionTime":"2025-12-16T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.478797 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.478841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.478853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.478870 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.478882 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:23Z","lastTransitionTime":"2025-12-16T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.522107 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.522122 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:23 crc kubenswrapper[4805]: E1216 11:56:23.522315 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:23 crc kubenswrapper[4805]: E1216 11:56:23.522406 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.586242 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.586281 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.586292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.586309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.586327 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:23Z","lastTransitionTime":"2025-12-16T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.690193 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.690311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.690333 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.690358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.690380 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:23Z","lastTransitionTime":"2025-12-16T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.793564 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.793621 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.793639 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.793661 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.793673 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:23Z","lastTransitionTime":"2025-12-16T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.852892 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovnkube-controller/2.log" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.853781 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovnkube-controller/1.log" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.857735 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerID="9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57" exitCode=1 Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.857782 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerDied","Data":"9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57"} Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.857821 4805 scope.go:117] "RemoveContainer" containerID="e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.858629 4805 scope.go:117] "RemoveContainer" containerID="9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57" Dec 16 11:56:23 crc kubenswrapper[4805]: E1216 11:56:23.858840 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-lwjrh_openshift-ovn-kubernetes(cb7da1ad-f74d-471f-a98f-274cef7fe393)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.872796 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:23Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.883392 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:23Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.892902 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:23Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.901433 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.901477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.901488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.901503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.901514 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:23Z","lastTransitionTime":"2025-12-16T11:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.905052 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:23Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.917221 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:23Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.935687 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f114
6dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:23Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.947133 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:23Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.957777 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd7db4d-e488-4922-a120-7bf5242ef90f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f08fe672a7cdd5082cea8ed2cacc5c71754493f2470a7a81fa6baff03eee5c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ed200ae81bc9ecf0225a69d59eb4385bcba5823c87abc590d781340c60490f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e366188c71f88353931b92e19bc6a1c2b58eb3a55bf872be24918600a1f38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:23Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.972035 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:23Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.984421 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:23Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:23 crc kubenswrapper[4805]: I1216 11:56:23.994805 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:23Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.003795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.003830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.003839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.003852 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.003861 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:24Z","lastTransitionTime":"2025-12-16T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.005415 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:24Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.016577 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:24Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.027687 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:24Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.041421 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:24Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.064408 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cbe68589c85ed2bbb75f1cf1fa48d393dd9c
f1a162aa2ec31160ab82fa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:04Z\\\",\\\"message\\\":\\\"ns/node-resolver-btjs7 openshift-kube-apiserver/kube-apiserver-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g openshift-image-registry/node-ca-br4tl openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1216 11:56:04.036521 6235 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1216 11:56:04.036537 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036580 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036596 6235 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1216 11:56:04.036603 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1216 11:56:04.036614 6235 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036629 6235 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1216 11:56:04.036691 6235 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:22Z\\\",\\\"message\\\":\\\"Set:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1216 11:56:22.543106 6416 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1216 11:56:22.543067 6416 ovnkube.go:599] Stopped ovnkube\\\\nI1216 11:56:22.543176 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1216 
11:56:22.543130 6416 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1216 11:56:22.543258 6416 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:24Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.080636 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc0dc93-5341-4b1e-840d-2c0c951ff142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ct6d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:24Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.094952 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:24Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.106800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.106862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.106876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.106892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.106905 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:24Z","lastTransitionTime":"2025-12-16T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.209342 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.209390 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.209409 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.209427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.209439 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:24Z","lastTransitionTime":"2025-12-16T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.311408 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.311449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.311463 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.311482 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.311494 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:24Z","lastTransitionTime":"2025-12-16T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.413732 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.413770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.413779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.413793 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.413805 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:24Z","lastTransitionTime":"2025-12-16T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.516553 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.516594 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.516604 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.516622 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.516631 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:24Z","lastTransitionTime":"2025-12-16T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.522072 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.522079 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:24 crc kubenswrapper[4805]: E1216 11:56:24.522264 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:24 crc kubenswrapper[4805]: E1216 11:56:24.522367 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.619452 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.619508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.619526 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.619545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.619558 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:24Z","lastTransitionTime":"2025-12-16T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.722229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.722293 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.722307 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.722324 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.722336 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:24Z","lastTransitionTime":"2025-12-16T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.825161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.825198 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.825210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.825226 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.825236 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:24Z","lastTransitionTime":"2025-12-16T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.862017 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovnkube-controller/2.log" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.927763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.927814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.927829 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.927849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:24 crc kubenswrapper[4805]: I1216 11:56:24.927865 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:24Z","lastTransitionTime":"2025-12-16T11:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.035095 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.035161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.035173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.035191 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.035206 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:25Z","lastTransitionTime":"2025-12-16T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.137708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.137753 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.137765 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.137781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.137792 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:25Z","lastTransitionTime":"2025-12-16T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.240314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.240351 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.240359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.240372 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.240382 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:25Z","lastTransitionTime":"2025-12-16T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.342626 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.342668 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.342676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.342691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.342701 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:25Z","lastTransitionTime":"2025-12-16T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.446390 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.446477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.446513 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.446544 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.446567 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:25Z","lastTransitionTime":"2025-12-16T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.522433 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.522534 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:25 crc kubenswrapper[4805]: E1216 11:56:25.522570 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:25 crc kubenswrapper[4805]: E1216 11:56:25.522648 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.549301 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.549349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.549358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.549371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.549380 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:25Z","lastTransitionTime":"2025-12-16T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.651201 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.651241 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.651254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.651270 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.651297 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:25Z","lastTransitionTime":"2025-12-16T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.753694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.753741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.753750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.753762 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.753771 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:25Z","lastTransitionTime":"2025-12-16T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.856399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.856438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.856447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.856460 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.856470 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:25Z","lastTransitionTime":"2025-12-16T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.958833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.958865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.958873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.958886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:25 crc kubenswrapper[4805]: I1216 11:56:25.958895 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:25Z","lastTransitionTime":"2025-12-16T11:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.061637 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.061676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.061687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.061704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.061716 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:26Z","lastTransitionTime":"2025-12-16T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.164655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.164711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.164734 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.164762 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.164784 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:26Z","lastTransitionTime":"2025-12-16T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.267782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.267820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.267834 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.267855 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.267872 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:26Z","lastTransitionTime":"2025-12-16T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.370280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.370336 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.370351 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.370370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.370386 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:26Z","lastTransitionTime":"2025-12-16T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.472400 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.472438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.472449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.472466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.472477 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:26Z","lastTransitionTime":"2025-12-16T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.522019 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:26 crc kubenswrapper[4805]: E1216 11:56:26.522176 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.522281 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:26 crc kubenswrapper[4805]: E1216 11:56:26.522355 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.547470 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cbe68589c85ed2bbb75f1cf1fa48d393dd9c
f1a162aa2ec31160ab82fa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:04Z\\\",\\\"message\\\":\\\"ns/node-resolver-btjs7 openshift-kube-apiserver/kube-apiserver-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g openshift-image-registry/node-ca-br4tl openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1216 11:56:04.036521 6235 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1216 11:56:04.036537 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036580 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036596 6235 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1216 11:56:04.036603 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1216 11:56:04.036614 6235 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036629 6235 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1216 11:56:04.036691 6235 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:22Z\\\",\\\"message\\\":\\\"Set:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1216 11:56:22.543106 6416 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1216 11:56:22.543067 6416 ovnkube.go:599] Stopped ovnkube\\\\nI1216 11:56:22.543176 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1216 
11:56:22.543130 6416 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1216 11:56:22.543258 6416 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.557801 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc0dc93-5341-4b1e-840d-2c0c951ff142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ct6d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.569794 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.575515 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.575701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.575774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.575899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.575985 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:26Z","lastTransitionTime":"2025-12-16T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.585512 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.599621 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.614069 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.627443 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/r
un/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.640278 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.652930 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.665902 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.677801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.677849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.677864 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.677882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.677894 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:26Z","lastTransitionTime":"2025-12-16T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.678629 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.694587 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd7db4d-e488-4922-a120-7bf5242ef90f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f08fe672a7cdd5082cea8ed2cacc5c71754493f2470a7a81fa6baff03eee5c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ed200ae81bc9ecf0225a69d59eb4385bcba5823c87abc590d781340c60490f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e366188c71f88353931b92e19bc6a1c2b58eb3a55bf872be24918600a1f38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c0
03c5cc116e37155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.713585 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.727215 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.740186 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.753406 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.766517 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.775765 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:26Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.781287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.781319 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.781327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.781341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.781352 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:26Z","lastTransitionTime":"2025-12-16T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.884631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.884663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.884671 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.884684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.884695 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:26Z","lastTransitionTime":"2025-12-16T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.987995 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.988697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.988716 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.988750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:26 crc kubenswrapper[4805]: I1216 11:56:26.988763 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:26Z","lastTransitionTime":"2025-12-16T11:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
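The NodeNotReady churn that follows comes from the runtime's network readiness probe: no CNI configuration file under /etc/kubernetes/cni/net.d/. A sketch of the equivalent presence check; the accepted extensions (.conf, .conflist, .json) are an assumption based on common CNI loader behavior, not stated in this log:

// cnicheck.go - sketch of the presence check behind the repeated
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read dir:", err)
		os.Exit(1)
	}
	var found []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		// Extensions assumed here; see lead-in note.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		// Matches the condition the kubelet keeps reporting above:
		// NetworkReady=false / NetworkPluginNotReady.
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Println("CNI configurations:", found)
}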
Has your network provider started?"} Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.090422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.090469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.090479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.090496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.090506 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:27Z","lastTransitionTime":"2025-12-16T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.193187 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.193245 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.193259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.193273 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.193283 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:27Z","lastTransitionTime":"2025-12-16T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.296065 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.296122 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.296163 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.296186 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.296205 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:27Z","lastTransitionTime":"2025-12-16T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.399425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.399655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.399758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.399898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.400049 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:27Z","lastTransitionTime":"2025-12-16T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.502303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.502346 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.502357 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.502377 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.502392 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:27Z","lastTransitionTime":"2025-12-16T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.522672 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:27 crc kubenswrapper[4805]: E1216 11:56:27.522859 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.523079 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:27 crc kubenswrapper[4805]: E1216 11:56:27.523316 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.604678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.604949 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.605040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.605130 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.605254 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:27Z","lastTransitionTime":"2025-12-16T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.707692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.708001 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.708308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.708402 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.711891 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:27Z","lastTransitionTime":"2025-12-16T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.815610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.815662 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.815672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.815686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.815694 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:27Z","lastTransitionTime":"2025-12-16T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.918117 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.918718 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.918792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.919116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:27 crc kubenswrapper[4805]: I1216 11:56:27.919402 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:27Z","lastTransitionTime":"2025-12-16T11:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.022442 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.022533 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.022548 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.022571 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.022588 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:28Z","lastTransitionTime":"2025-12-16T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.125819 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.125875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.125892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.125914 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.125931 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:28Z","lastTransitionTime":"2025-12-16T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.127700 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.127738 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.127753 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.127769 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.127783 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:28Z","lastTransitionTime":"2025-12-16T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:28 crc kubenswrapper[4805]: E1216 11:56:28.142770 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:28Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.147265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.147324 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
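The payload the kubelet could not deliver is a strategic-merge patch over status: the $setElementOrder/conditions directive pins the ordering of the conditions list, while the conditions array carries only the changed entries. An abridged sketch of that shape, with values shortened from the log for illustration:

// patchsketch.go - sketch of the strategic-merge-patch body seen
// (escaped) in the "Error updating node status" records above.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]any{
		"status": map[string]any{
			// Pins list order; one entry per condition type.
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"},
				{"type": "DiskPressure"},
				{"type": "PIDPressure"},
				{"type": "Ready"},
			},
			// Only the changed condition is carried in full.
			"conditions": []map[string]any{
				{
					"type":               "Ready",
					"status":             "False",
					"reason":             "KubeletNotReady",
					"lastHeartbeatTime":  "2025-12-16T11:56:28Z",
					"lastTransitionTime": "2025-12-16T11:56:28Z",
				},
			},
		},
	}
	out, err := json.MarshalIndent(patch, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}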
event="NodeHasNoDiskPressure" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.147340 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.147360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.147720 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:28Z","lastTransitionTime":"2025-12-16T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:28 crc kubenswrapper[4805]: E1216 11:56:28.161534 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:28Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.164958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.164999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
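The patch bodies are hard to read because they arrive quoted: the kubelet quotes the JSON once when logging it, and this capture appears to escape it a second time, hence the \\\" runs. A sketch, under that assumption, of recovering and pretty-printing such a fragment with strconv.Unquote (the sample is a shortened stand-in for the real payload):

// unquote.go - sketch: recover the JSON patch body from its quoted form.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// As the payload would appear inside the kubelet's own log line;
	// a fragment pasted from this capture may need two Unquote passes.
	quoted := `"{\"status\":{\"conditions\":[{\"type\":\"Ready\",\"status\":\"False\"}]}}"`
	raw, err := strconv.Unquote(quoted)
	if err != nil {
		panic(err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(raw), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}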
event="NodeHasNoDiskPressure" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.165046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.165069 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.165081 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:28Z","lastTransitionTime":"2025-12-16T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:28 crc kubenswrapper[4805]: E1216 11:56:28.177165 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:28Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.180986 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.181016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.181026 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.181042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.181053 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:28Z","lastTransitionTime":"2025-12-16T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:28 crc kubenswrapper[4805]: E1216 11:56:28.195028 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:28Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.198654 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.198682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
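
The patch attempt above, and every retry that follows, fails for the same reason: the kubelet's status PATCH (a strategic-merge patch, which is why the payload carries the $setElementOrder/conditions directive) must pass the node.network-node-identity.openshift.io admission webhook, and the TLS handshake with that webhook at https://127.0.0.1:9743 rejects a serving certificate that expired on 2025-08-24, months before the node's clock time of 2025-12-16. A minimal Go sketch of the same NotBefore/NotAfter check the handshake applies follows; the certificate path is illustrative, since the log does not say where the webhook's serving certificate lives on disk.

    // certcheck.go — report a PEM certificate's validity window, the same
    // check behind "x509: certificate has expired or is not yet valid".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Illustrative path, not taken from the log.
        pemBytes, err := os.ReadFile("/tmp/webhook-serving-cert.pem")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now()
        fmt.Printf("NotBefore: %s\nNotAfter:  %s\nNow:       %s\n",
            cert.NotBefore, cert.NotAfter, now)
        switch {
        case now.Before(cert.NotBefore):
            fmt.Println("x509: certificate is not yet valid")
        case now.After(cert.NotAfter):
            // The branch every handshake in this log keeps hitting.
            fmt.Println("x509: certificate has expired")
        default:
            fmt.Println("certificate is inside its validity window")
        }
    }

Until that certificate is rotated (or the node clock is corrected, if the clock rather than the certificate is wrong), every node-status patch will keep failing the same way, as the retries below show.
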
event="NodeHasNoDiskPressure" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.198693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.198710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.198725 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:28Z","lastTransitionTime":"2025-12-16T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:28 crc kubenswrapper[4805]: E1216 11:56:28.211015 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:28Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:28 crc kubenswrapper[4805]: E1216 11:56:28.211160 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.227907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
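
The "update node status exceeds retry count" entry above marks the end of one status-sync cycle: kubelet attempts the update a small fixed number of times per cycle (nodeStatusUpdateRetry, five in current upstream kubelet) and then gives up until the next sync period, which is why the same burst of errors recurs. A rough Go sketch of that loop shape, with a stand-in for the failing PATCH (tryUpdateNodeStatus here is a placeholder, not kubelet's real implementation):

    // retry.go — the bounded-retry shape of kubelet's node status update.
    package main

    import (
        "errors"
        "fmt"
    )

    // Mirrors upstream kubelet's nodeStatusUpdateRetry constant.
    const nodeStatusUpdateRetry = 5

    // Stand-in for the PATCH against the Node object; here it always
    // fails, like the webhook rejection in the log.
    func tryUpdateNodeStatus(attempt int) error {
        return fmt.Errorf("attempt %d: failed to patch status: webhook certificate expired", attempt)
    }

    func updateNodeStatus() error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := tryUpdateNodeStatus(i); err != nil {
                fmt.Println("Error updating node status, will retry:", err)
                continue
            }
            return nil
        }
        return errors.New("update node status exceeds retry count")
    }

    func main() {
        if err := updateNodeStatus(); err != nil {
            fmt.Println("Unable to update node status:", err)
        }
    }
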
event="NodeHasSufficientMemory" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.227931 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.227940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.227952 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.227960 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:28Z","lastTransitionTime":"2025-12-16T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.330513 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.330544 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.330553 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.330566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.330577 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:28Z","lastTransitionTime":"2025-12-16T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.433350 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.433389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.433398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.433413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.433422 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:28Z","lastTransitionTime":"2025-12-16T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.521860 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:28 crc kubenswrapper[4805]: E1216 11:56:28.521998 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.522288 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:28 crc kubenswrapper[4805]: E1216 11:56:28.522386 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.536210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.536253 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.536265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.536283 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.536295 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:28Z","lastTransitionTime":"2025-12-16T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.638121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.638209 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.638227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.638247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.638258 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:28Z","lastTransitionTime":"2025-12-16T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.740780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.740827 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.740842 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.740863 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.740873 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:28Z","lastTransitionTime":"2025-12-16T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.842770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.842806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.842817 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.842833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.842843 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:28Z","lastTransitionTime":"2025-12-16T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.945450 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.945494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.945505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.945522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:28 crc kubenswrapper[4805]: I1216 11:56:28.945534 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:28Z","lastTransitionTime":"2025-12-16T11:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.047419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.047446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.047457 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.047471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.047480 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:29Z","lastTransitionTime":"2025-12-16T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.150836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.150893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.150902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.150918 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.150927 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:29Z","lastTransitionTime":"2025-12-16T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.253642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.253674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.253683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.253696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.253707 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:29Z","lastTransitionTime":"2025-12-16T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.356073 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.356128 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.356167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.356196 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.356205 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:29Z","lastTransitionTime":"2025-12-16T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.458862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.458903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.458912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.458933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.458944 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:29Z","lastTransitionTime":"2025-12-16T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.521824 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.521888 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:29 crc kubenswrapper[4805]: E1216 11:56:29.521973 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:29 crc kubenswrapper[4805]: E1216 11:56:29.522090 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.562276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.562321 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.562332 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.562351 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.562363 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:29Z","lastTransitionTime":"2025-12-16T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.664375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.664406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.664416 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.664428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.664438 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:29Z","lastTransitionTime":"2025-12-16T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.767101 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.767162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.767175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.767190 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.767203 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:29Z","lastTransitionTime":"2025-12-16T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.869573 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.869658 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.869672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.869688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.869701 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:29Z","lastTransitionTime":"2025-12-16T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.971270 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.971311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.971330 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.971344 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:29 crc kubenswrapper[4805]: I1216 11:56:29.971355 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:29Z","lastTransitionTime":"2025-12-16T11:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.073861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.073998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.074428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.074523 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.074795 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:30Z","lastTransitionTime":"2025-12-16T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.177504 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.177556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.177566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.177585 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.177598 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:30Z","lastTransitionTime":"2025-12-16T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.280183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.280207 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.280215 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.280227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.280236 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:30Z","lastTransitionTime":"2025-12-16T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.382208 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.382254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.382269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.382287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.382297 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:30Z","lastTransitionTime":"2025-12-16T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.485705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.486088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.486175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.486277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.486366 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:30Z","lastTransitionTime":"2025-12-16T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.522634 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.522755 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:30 crc kubenswrapper[4805]: E1216 11:56:30.522828 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:30 crc kubenswrapper[4805]: E1216 11:56:30.522993 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.589642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.589680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.589691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.589707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.589718 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:30Z","lastTransitionTime":"2025-12-16T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.692926 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.692977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.692991 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.693011 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.693024 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:30Z","lastTransitionTime":"2025-12-16T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.795831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.795892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.795902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.795923 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.795935 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:30Z","lastTransitionTime":"2025-12-16T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.898381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.898407 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.898415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.898427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:30 crc kubenswrapper[4805]: I1216 11:56:30.898436 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:30Z","lastTransitionTime":"2025-12-16T11:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.001210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.001242 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.001251 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.001266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.001274 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:31Z","lastTransitionTime":"2025-12-16T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.104176 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.104215 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.104223 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.104241 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.104252 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:31Z","lastTransitionTime":"2025-12-16T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.207256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.207310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.207320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.207338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.207349 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:31Z","lastTransitionTime":"2025-12-16T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.310886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.310922 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.310934 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.310950 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.310961 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:31Z","lastTransitionTime":"2025-12-16T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.413369 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.413410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.413421 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.413439 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.413450 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:31Z","lastTransitionTime":"2025-12-16T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.515810 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.515843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.515851 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.515865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.515873 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:31Z","lastTransitionTime":"2025-12-16T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.522260 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.522272 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:31 crc kubenswrapper[4805]: E1216 11:56:31.522392 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:31 crc kubenswrapper[4805]: E1216 11:56:31.522461 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.618123 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.618180 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.618192 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.618206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.618216 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:31Z","lastTransitionTime":"2025-12-16T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.720078 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.720111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.720123 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.720172 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.720187 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:31Z","lastTransitionTime":"2025-12-16T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.822425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.822484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.822495 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.822516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.822531 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:31Z","lastTransitionTime":"2025-12-16T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.925403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.925443 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.925455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.925470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:31 crc kubenswrapper[4805]: I1216 11:56:31.925480 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:31Z","lastTransitionTime":"2025-12-16T11:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.027186 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.027245 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.027266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.027289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.027306 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:32Z","lastTransitionTime":"2025-12-16T11:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.130350 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.130394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.130405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.130425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.130437 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:32Z","lastTransitionTime":"2025-12-16T11:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.233388 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.233463 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.233478 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.233499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.233512 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:32Z","lastTransitionTime":"2025-12-16T11:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.335982 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.336023 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.336031 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.336048 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.336059 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:32Z","lastTransitionTime":"2025-12-16T11:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.444597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.444649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.444697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.444718 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.444730 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:32Z","lastTransitionTime":"2025-12-16T11:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.522482 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.522527 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:32 crc kubenswrapper[4805]: E1216 11:56:32.522801 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:32 crc kubenswrapper[4805]: E1216 11:56:32.522848 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.534303 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.548609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.548650 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.548660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.548675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.548687 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:32Z","lastTransitionTime":"2025-12-16T11:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.651303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.651337 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.651347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.651361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.651408 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:32Z","lastTransitionTime":"2025-12-16T11:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.753698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.753764 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.753777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.753796 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.753808 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:32Z","lastTransitionTime":"2025-12-16T11:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.855763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.855806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.855820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.855836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.855848 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:32Z","lastTransitionTime":"2025-12-16T11:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.958644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.958698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.958711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.958731 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:32 crc kubenswrapper[4805]: I1216 11:56:32.958742 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:32Z","lastTransitionTime":"2025-12-16T11:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.061016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.061055 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.061067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.061085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.061096 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:33Z","lastTransitionTime":"2025-12-16T11:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.163500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.163556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.163567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.163582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.163594 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:33Z","lastTransitionTime":"2025-12-16T11:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.265822 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.265854 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.265862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.265875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.265883 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:33Z","lastTransitionTime":"2025-12-16T11:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.368280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.368331 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.368341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.368361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.368372 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:33Z","lastTransitionTime":"2025-12-16T11:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.471165 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.471201 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.471211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.471226 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.471235 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:33Z","lastTransitionTime":"2025-12-16T11:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.521675 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.521704 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:33 crc kubenswrapper[4805]: E1216 11:56:33.521875 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:33 crc kubenswrapper[4805]: E1216 11:56:33.521905 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.573993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.574033 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.574043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.574057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.574067 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:33Z","lastTransitionTime":"2025-12-16T11:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.676775 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.676808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.676817 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.676831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.676840 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:33Z","lastTransitionTime":"2025-12-16T11:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.779245 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.779287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.779300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.779318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.779330 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:33Z","lastTransitionTime":"2025-12-16T11:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.882235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.882285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.882298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.882318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.882328 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:33Z","lastTransitionTime":"2025-12-16T11:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.984462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.984505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.984517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.984533 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:33 crc kubenswrapper[4805]: I1216 11:56:33.984543 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:33Z","lastTransitionTime":"2025-12-16T11:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.086608 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.086703 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.086715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.086747 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.086757 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:34Z","lastTransitionTime":"2025-12-16T11:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.188971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.189008 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.189017 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.189032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.189042 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:34Z","lastTransitionTime":"2025-12-16T11:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.291897 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.291946 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.291959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.291976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.291988 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:34Z","lastTransitionTime":"2025-12-16T11:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.394899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.394930 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.394940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.394957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.394968 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:34Z","lastTransitionTime":"2025-12-16T11:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.497482 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.497537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.497546 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.497560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.497573 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:34Z","lastTransitionTime":"2025-12-16T11:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.521705 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.521759 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:34 crc kubenswrapper[4805]: E1216 11:56:34.521847 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:34 crc kubenswrapper[4805]: E1216 11:56:34.521982 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.600466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.600509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.600520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.600534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.600546 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:34Z","lastTransitionTime":"2025-12-16T11:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.702992 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.703032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.703043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.703060 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.703097 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:34Z","lastTransitionTime":"2025-12-16T11:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.750861 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs\") pod \"network-metrics-daemon-ct6d8\" (UID: \"2cc0dc93-5341-4b1e-840d-2c0c951ff142\") " pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:34 crc kubenswrapper[4805]: E1216 11:56:34.751091 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 16 11:56:34 crc kubenswrapper[4805]: E1216 11:56:34.751189 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs podName:2cc0dc93-5341-4b1e-840d-2c0c951ff142 nodeName:}" failed. No retries permitted until 2025-12-16 11:57:06.751167627 +0000 UTC m=+100.469425472 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs") pod "network-metrics-daemon-ct6d8" (UID: "2cc0dc93-5341-4b1e-840d-2c0c951ff142") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.805093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.805133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.805162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.805175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.805185 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:34Z","lastTransitionTime":"2025-12-16T11:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.908023 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.908076 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.908087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.908109 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:34 crc kubenswrapper[4805]: I1216 11:56:34.908119 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:34Z","lastTransitionTime":"2025-12-16T11:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.010264 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.010295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.010313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.010327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.010336 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:35Z","lastTransitionTime":"2025-12-16T11:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.113547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.113617 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.113631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.113648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.113669 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:35Z","lastTransitionTime":"2025-12-16T11:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.216360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.216449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.216463 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.216479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.216488 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:35Z","lastTransitionTime":"2025-12-16T11:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.318790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.318821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.318829 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.318842 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.318850 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:35Z","lastTransitionTime":"2025-12-16T11:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.421178 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.421221 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.421231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.421250 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.421260 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:35Z","lastTransitionTime":"2025-12-16T11:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.522362 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.522418 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:35 crc kubenswrapper[4805]: E1216 11:56:35.522523 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:35 crc kubenswrapper[4805]: E1216 11:56:35.522848 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.524121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.524177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.524189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.524204 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.524217 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:35Z","lastTransitionTime":"2025-12-16T11:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.627592 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.627644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.627656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.627675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.627687 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:35Z","lastTransitionTime":"2025-12-16T11:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.731479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.731530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.731545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.731568 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.731586 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:35Z","lastTransitionTime":"2025-12-16T11:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.834156 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.834450 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.834525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.834623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.834713 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:35Z","lastTransitionTime":"2025-12-16T11:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.897288 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vffbc_369287d8-0d6d-483f-8c4b-5439ae4d065c/kube-multus/0.log" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.897335 4805 generic.go:334] "Generic (PLEG): container finished" podID="369287d8-0d6d-483f-8c4b-5439ae4d065c" containerID="a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879" exitCode=1 Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.897367 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vffbc" event={"ID":"369287d8-0d6d-483f-8c4b-5439ae4d065c","Type":"ContainerDied","Data":"a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879"} Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.897736 4805 scope.go:117] "RemoveContainer" containerID="a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.911779 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:35Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.925275 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:35Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.937858 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:35Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.938272 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.938295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.938302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.938320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.938337 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:35Z","lastTransitionTime":"2025-12-16T11:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.952691 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:35Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.971729 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:04Z\\\",\\\"message\\\":\\\"ns/node-resolver-btjs7 openshift-kube-apiserver/kube-apiserver-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g openshift-image-registry/node-ca-br4tl openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1216 11:56:04.036521 6235 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1216 11:56:04.036537 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036580 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036596 6235 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1216 11:56:04.036603 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1216 11:56:04.036614 6235 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036629 6235 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1216 11:56:04.036691 6235 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:22Z\\\",\\\"message\\\":\\\"Set:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1216 11:56:22.543106 6416 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1216 11:56:22.543067 6416 ovnkube.go:599] Stopped ovnkube\\\\nI1216 11:56:22.543176 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1216 11:56:22.543130 6416 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1216 11:56:22.543258 6416 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:35Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.983062 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc0dc93-5341-4b1e-840d-2c0c951ff142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ct6d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:35Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:35 crc kubenswrapper[4805]: I1216 11:56:35.995747 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:35Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.010061 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.022108 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.036367 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.040023 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.040068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.040079 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.040092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.040101 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:36Z","lastTransitionTime":"2025-12-16T11:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.048997 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:35Z\\\",\\\"message\\\":\\\"2025-12-16T11:55:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_96c59c26-372b-4850-96c9-0e8a149cdc4e\\\\n2025-12-16T11:55:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_96c59c26-372b-4850-96c9-0e8a149cdc4e to /host/opt/cni/bin/\\\\n2025-12-16T11:55:50Z [verbose] multus-daemon started\\\\n2025-12-16T11:55:50Z [verbose] Readiness Indicator file check\\\\n2025-12-16T11:56:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.061361 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd7db4d-e488-4922-a120-7bf5242ef90f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f08fe672a7cdd5082cea8ed2cacc5c71754493f2470a7a81fa6baff03eee5c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ed200ae81bc9ecf0225a69d59eb4385bcba5823c87abc590d781340c60490f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e366188c71f88353931b92e19bc6a1c2b58eb3a55bf872be24918600a1f38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.081902 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.094905 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.105444 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13bc253-a448-40ef-b252-1f47074986e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc9e9f3d773e7d01ae06a38b49b8e51be1fc9036229525ba27b0c1bd27078555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5ae9c99b0d1c6d9488eae6d4be7254fab20de4ec58012551e11e486453d2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d5ae9c99b0d1c6d9488eae6d4be7254fab20de4ec58012551e11e486453d2d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.117676 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.129290 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.140900 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.142008 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.142044 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.142056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.142070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.142080 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:36Z","lastTransitionTime":"2025-12-16T11:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.153087 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.246731 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.246762 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.246770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.246785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.246795 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:36Z","lastTransitionTime":"2025-12-16T11:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.348580 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.348623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.348634 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.348651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.348662 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:36Z","lastTransitionTime":"2025-12-16T11:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.452250 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.452331 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.452350 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.452377 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.452392 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:36Z","lastTransitionTime":"2025-12-16T11:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.522617 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.522942 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:36 crc kubenswrapper[4805]: E1216 11:56:36.523092 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:36 crc kubenswrapper[4805]: E1216 11:56:36.523297 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.524002 4805 scope.go:117] "RemoveContainer" containerID="9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57" Dec 16 11:56:36 crc kubenswrapper[4805]: E1216 11:56:36.524279 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-lwjrh_openshift-ovn-kubernetes(cb7da1ad-f74d-471f-a98f-274cef7fe393)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.538273 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.555448 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.612116 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.621420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.621481 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.621493 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.621515 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.621529 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:36Z","lastTransitionTime":"2025-12-16T11:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.643242 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.664374 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3cef23d46894fcfe13bfaf19d2a491fadf53c2dd0163a46e8f5167c3ce65783\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:04Z\\\",\\\"message\\\":\\\"ns/node-resolver-btjs7 openshift-kube-apiserver/kube-apiserver-crc openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g openshift-image-registry/node-ca-br4tl openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1216 11:56:04.036521 6235 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1216 11:56:04.036537 6235 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036580 6235 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036596 6235 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1216 11:56:04.036603 6235 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1216 11:56:04.036614 6235 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1216 11:56:04.036629 6235 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1216 11:56:04.036691 6235 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:22Z\\\",\\\"message\\\":\\\"Set:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1216 11:56:22.543106 6416 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1216 11:56:22.543067 6416 ovnkube.go:599] Stopped ovnkube\\\\nI1216 11:56:22.543176 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1216 11:56:22.543130 6416 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1216 11:56:22.543258 6416 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.684901 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc0dc93-5341-4b1e-840d-2c0c951ff142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ct6d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.710217 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.727698 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.731329 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.731375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.731388 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.731409 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.731419 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:36Z","lastTransitionTime":"2025-12-16T11:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.743238 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.757125 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.773105 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:35Z\\\",\\\"message\\\":\\\"2025-12-16T11:55:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_96c59c26-372b-4850-96c9-0e8a149cdc4e\\\\n2025-12-16T11:55:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_96c59c26-372b-4850-96c9-0e8a149cdc4e to /host/opt/cni/bin/\\\\n2025-12-16T11:55:50Z [verbose] multus-daemon started\\\\n2025-12-16T11:55:50Z [verbose] Readiness Indicator file check\\\\n2025-12-16T11:56:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.792679 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd7db4d-e488-4922-a120-7bf5242ef90f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f08fe672a7cdd5082cea8ed2cacc5c71754493f2470a7a81fa6baff03eee5c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ed200ae81bc9ecf0225a69d59eb4385bcba5823c87abc590d781340c60490f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e366188c71f88353931b92e19bc6a1c2b58eb3a55bf872be24918600a1f38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.820277 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.834990 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.848239 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13bc253-a448-40ef-b252-1f47074986e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc9e9f3d773e7d01ae06a38b49b8e51be1fc9036229525ba27b0c1bd27078555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5ae9c99b0d1c6d9488eae6d4be7254fab20de4ec58012551e11e486453d2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d5ae9c99b0d1c6d9488eae6d4be7254fab20de4ec58012551e11e486453d2d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.863109 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.876551 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.888496 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.900985 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 
11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.915769 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:35Z\\\",\\\"message\\\":\\\"2025-12-16T11:55:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_96c59c26-372b-4850-96c9-0e8a149cdc4e\\\\n2025-12-16T11:55:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_96c59c26-372b-4850-96c9-0e8a149cdc4e to /host/opt/cni/bin/\\\\n2025-12-16T11:55:50Z [verbose] multus-daemon started\\\\n2025-12-16T11:55:50Z [verbose] Readiness Indicator file check\\\\n2025-12-16T11:56:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.926038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.926078 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.926086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.926103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.926117 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:36Z","lastTransitionTime":"2025-12-16T11:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.929512 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vffbc_369287d8-0d6d-483f-8c4b-5439ae4d065c/kube-multus/0.log" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.929584 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vffbc" event={"ID":"369287d8-0d6d-483f-8c4b-5439ae4d065c","Type":"ContainerStarted","Data":"d216456ebbb67a5782522dbfd434d574659cd483a4a2ca25f3a7a5bc963f8bd3"} Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.930914 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\
\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.945472 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.960372 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.976158 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:36 crc kubenswrapper[4805]: I1216 11:56:36.991713 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd7db4d-e488-4922-a120-7bf5242ef90f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f08fe672a7cdd5082cea8ed2cacc5c71754493f2470a7a81fa6baff03eee5c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ed200ae81bc9ecf0225a69d59eb4385bcba5823c87abc590d781340c60490f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e366188c71f88353931b92e19bc6a1c2b58eb3a55bf872be24918600a1f38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:36Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.013688 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.028569 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.029255 4805 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.029278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.029288 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.029303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.029315 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:37Z","lastTransitionTime":"2025-12-16T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.041604 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.054495 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13bc253-a448-40ef-b252-1f47074986e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc9e9f3d773e7d01ae06a38b49b8e51be1fc9036229525ba27b0c1bd27078555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5ae9c99b0d1c6d9488eae6d4be7254fab20de4ec58012551e11e486453d2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d5ae9c99b0d1c6d9488eae6d4be7254fab20de4ec58012551e11e486453d2d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.068255 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.084067 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.095691 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.118572 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:22Z\\\",\\\"message\\\":\\\"Set:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1216 11:56:22.543106 6416 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1216 11:56:22.543067 6416 ovnkube.go:599] Stopped ovnkube\\\\nI1216 11:56:22.543176 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1216 11:56:22.543130 6416 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1216 11:56:22.543258 6416 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lwjrh_openshift-ovn-kubernetes(cb7da1ad-f74d-471f-a98f-274cef7fe393)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.130905 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc0dc93-5341-4b1e-840d-2c0c951ff142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ct6d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.131522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.131555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.131564 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.131581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.131590 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:37Z","lastTransitionTime":"2025-12-16T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.143783 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.161644 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.173939 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.187757 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.198620 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd7db4d-e488-4922-a120-7bf5242ef90f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f08fe672a7cdd5082cea8ed2cacc5c71754493f2470a7a81fa6baff03eee5c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ed200ae81bc9ecf0225a69d59eb4385bcba5823c87abc590d781340c60490f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e366188c71f88353931b92e19bc6a1c2b58eb3a55
bf872be24918600a1f38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.218582 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f114
6dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.231360 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.233724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.233762 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.233773 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.233791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.233802 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:37Z","lastTransitionTime":"2025-12-16T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.243348 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13bc253-a448-40ef-b252-1f47074986e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc9e9f3d773e7d01ae06a38b49b8e51be1fc9036229525ba27b0c1bd27078555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5ae9c99b0d1c6d9488eae6d4be7254fab20de4ec58012551e11e486453d2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d5ae9c99b0d1c6d9488eae6d4be7254fab20de4ec58012551e11e486453d2d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.258002 4805 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.269892 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.281983 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.295065 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 
11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.308667 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.321311 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.335375 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.336506 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.336592 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.336603 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.336619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.336630 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:37Z","lastTransitionTime":"2025-12-16T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.351477 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.369487 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:22Z\\\",\\\"message\\\":\\\"Set:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1216 11:56:22.543106 6416 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1216 11:56:22.543067 6416 ovnkube.go:599] Stopped ovnkube\\\\nI1216 11:56:22.543176 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1216 11:56:22.543130 6416 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1216 11:56:22.543258 6416 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lwjrh_openshift-ovn-kubernetes(cb7da1ad-f74d-471f-a98f-274cef7fe393)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.381648 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc0dc93-5341-4b1e-840d-2c0c951ff142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ct6d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.396407 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.412560 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.427877 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.438200 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.439664 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.439784 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.439873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.439958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.440045 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:37Z","lastTransitionTime":"2025-12-16T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.454605 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d216456ebbb67a5782522dbfd434d574659cd483a4a2ca25f3a7a5bc963f8bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:35Z\\\",\\\"message\\\":\\\"2025-12-16T11:55:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_96c59c26-372b-4850-96c9-0e8a149cdc4e\\\\n2025-12-16T11:55:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_96c59c26-372b-4850-96c9-0e8a149cdc4e to /host/opt/cni/bin/\\\\n2025-12-16T11:55:50Z [verbose] multus-daemon started\\\\n2025-12-16T11:55:50Z [verbose] Readiness Indicator file check\\\\n2025-12-16T11:56:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:37Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.521676 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.521676 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:37 crc kubenswrapper[4805]: E1216 11:56:37.521821 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:37 crc kubenswrapper[4805]: E1216 11:56:37.521865 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.542269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.542296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.542305 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.542318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.542327 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:37Z","lastTransitionTime":"2025-12-16T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.644070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.644116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.644129 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.644164 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:37 crc kubenswrapper[4805]: I1216 11:56:37.644175 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:37Z","lastTransitionTime":"2025-12-16T11:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [the same five entries — four "Recording event message for node" entries (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) and the setters.go:603 "Node became not ready" condition with the identical KubeletNotReady / NetworkPluginNotReady message — repeat at 11:56:37.747, 11:56:37.849, 11:56:37.951, 11:56:38.054, 11:56:38.156, 11:56:38.259 and 11:56:38.362, differing only in timestamps] Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.404660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.404689 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.404700 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.404715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.404725 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:38Z","lastTransitionTime":"2025-12-16T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 16 11:56:38 crc kubenswrapper[4805]: E1216 11:56:38.417527 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:38Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.421609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.421661 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.421676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.421693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.421707 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:38Z","lastTransitionTime":"2025-12-16T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:38 crc kubenswrapper[4805]: E1216 11:56:38.434565 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:38Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.439501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.439560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.439582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.439612 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.439633 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:38Z","lastTransitionTime":"2025-12-16T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:38 crc kubenswrapper[4805]: E1216 11:56:38.460430 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:38Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.463693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.463720 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
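Every failed patch in this capture has the same root error: the network-node-identity webhook listening on 127.0.0.1:9743 serves a TLS certificate that expired at 2025-08-24T17:21:41Z, while the node's clock reads 2025-12-16. Below is a minimal diagnostic sketch for confirming the serving certificate's validity window from the node. The endpoint and port are taken from the errors above; the script itself, and its reliance on the openssl CLI being on PATH, are illustrative assumptions rather than part of the log or of any OpenShift tooling.

    # check_webhook_cert.py — minimal sketch: fetch the TLS certificate served
    # on 127.0.0.1:9743 (the webhook endpoint named in the errors above) and
    # print its validity window.
    import ssl
    import subprocess

    HOST, PORT = "127.0.0.1", 9743  # endpoint from the failed webhook POSTs

    # get_server_certificate() does not verify the chain, so it still returns
    # the PEM even though the certificate is expired.
    pem = ssl.get_server_certificate((HOST, PORT))

    # The stdlib has no X.509 parser, so hand the PEM to the openssl CLI.
    out = subprocess.run(
        ["openssl", "x509", "-noout", "-startdate", "-enddate"],
        input=pem.encode(),
        capture_output=True,
        check=True,
    )
    print(out.stdout.decode(), end="")
    # Against this node the expected output would include
    # notAfter=Aug 24 17:21:41 2025 GMT, matching the
    # "certificate has expired" errors in the entries above.

If the printed notAfter matches the expiry quoted in the errors, the failures point at certificate rotation (or a badly skewed clock) rather than at anything in the kubelet itself.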
event="NodeHasNoDiskPressure" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.463731 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.463746 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.463757 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:38Z","lastTransitionTime":"2025-12-16T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:38 crc kubenswrapper[4805]: E1216 11:56:38.476211 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:38Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.479618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.479645 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.479656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.479672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.479683 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:38Z","lastTransitionTime":"2025-12-16T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:38 crc kubenswrapper[4805]: E1216 11:56:38.492945 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:38Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:38 crc kubenswrapper[4805]: E1216 11:56:38.493109 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.494881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.494910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.494923 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.494939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.494950 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:38Z","lastTransitionTime":"2025-12-16T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.523326 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:38 crc kubenswrapper[4805]: E1216 11:56:38.523442 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.523486 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:38 crc kubenswrapper[4805]: E1216 11:56:38.523606 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.597202 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.597237 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.597247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.597269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.597279 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:38Z","lastTransitionTime":"2025-12-16T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.699315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.699361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.699375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.699392 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.699405 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:38Z","lastTransitionTime":"2025-12-16T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.801643 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.801683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.801691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.801706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.801717 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:38Z","lastTransitionTime":"2025-12-16T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.903758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.903801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.903813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.903832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:38 crc kubenswrapper[4805]: I1216 11:56:38.903847 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:38Z","lastTransitionTime":"2025-12-16T11:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.006230 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.006271 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.006282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.006295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.006305 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:39Z","lastTransitionTime":"2025-12-16T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.108462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.108488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.108496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.108508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.108519 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:39Z","lastTransitionTime":"2025-12-16T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.210742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.210776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.210786 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.210800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.210810 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:39Z","lastTransitionTime":"2025-12-16T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.313550 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.313585 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.313593 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.313608 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.313617 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:39Z","lastTransitionTime":"2025-12-16T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.415505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.415547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.415561 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.415578 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.415590 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:39Z","lastTransitionTime":"2025-12-16T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.518181 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.518229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.518238 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.518252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.518261 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:39Z","lastTransitionTime":"2025-12-16T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.522457 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.522543 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:39 crc kubenswrapper[4805]: E1216 11:56:39.522576 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:39 crc kubenswrapper[4805]: E1216 11:56:39.522635 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.620649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.620681 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.620706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.620721 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.620730 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:39Z","lastTransitionTime":"2025-12-16T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.722846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.722888 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.722898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.722914 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.722927 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:39Z","lastTransitionTime":"2025-12-16T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.825651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.825704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.825723 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.825745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.825761 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:39Z","lastTransitionTime":"2025-12-16T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.928732 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.928835 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.928846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.928892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:39 crc kubenswrapper[4805]: I1216 11:56:39.928902 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:39Z","lastTransitionTime":"2025-12-16T11:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.031374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.031436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.031446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.031461 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.031470 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:40Z","lastTransitionTime":"2025-12-16T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.134263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.134307 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.134316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.134334 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.134347 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:40Z","lastTransitionTime":"2025-12-16T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.237354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.237402 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.237417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.237434 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.237445 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:40Z","lastTransitionTime":"2025-12-16T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.340312 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.340356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.340371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.340389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.340402 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:40Z","lastTransitionTime":"2025-12-16T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.442996 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.443038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.443047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.443062 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.443073 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:40Z","lastTransitionTime":"2025-12-16T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.522733 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:40 crc kubenswrapper[4805]: E1216 11:56:40.522890 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.522935 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:40 crc kubenswrapper[4805]: E1216 11:56:40.523083 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.546040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.546076 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.546084 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.546096 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.546104 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:40Z","lastTransitionTime":"2025-12-16T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.650466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.650552 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.650567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.650588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.650611 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:40Z","lastTransitionTime":"2025-12-16T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.754188 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.754237 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.754245 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.754259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.754268 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:40Z","lastTransitionTime":"2025-12-16T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.856940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.856989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.857005 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.857027 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.857050 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:40Z","lastTransitionTime":"2025-12-16T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.961015 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.961069 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.961083 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.961102 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:40 crc kubenswrapper[4805]: I1216 11:56:40.961117 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:40Z","lastTransitionTime":"2025-12-16T11:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.063781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.063883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.063899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.063926 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.063941 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:41Z","lastTransitionTime":"2025-12-16T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.167300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.167348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.167360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.167376 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.167388 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:41Z","lastTransitionTime":"2025-12-16T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.270006 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.270040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.270049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.270064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.270073 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:41Z","lastTransitionTime":"2025-12-16T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.372646 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.372698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.372710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.372726 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.373057 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:41Z","lastTransitionTime":"2025-12-16T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.475373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.475404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.475413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.475428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.475444 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:41Z","lastTransitionTime":"2025-12-16T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.521804 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.521915 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:41 crc kubenswrapper[4805]: E1216 11:56:41.521959 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:41 crc kubenswrapper[4805]: E1216 11:56:41.521996 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.577861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.577900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.577910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.577924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.577933 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:41Z","lastTransitionTime":"2025-12-16T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.680794 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.680847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.680860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.680878 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.680889 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:41Z","lastTransitionTime":"2025-12-16T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.783110 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.783177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.783188 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.783201 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.783210 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:41Z","lastTransitionTime":"2025-12-16T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.885474 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.885509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.885518 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.885530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.885541 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:41Z","lastTransitionTime":"2025-12-16T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.988224 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.988253 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.988261 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.988275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:41 crc kubenswrapper[4805]: I1216 11:56:41.988284 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:41Z","lastTransitionTime":"2025-12-16T11:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.090987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.091056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.091068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.091087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.091101 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:42Z","lastTransitionTime":"2025-12-16T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.194078 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.194166 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.194184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.194237 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.194252 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:42Z","lastTransitionTime":"2025-12-16T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.296848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.296940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.296961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.297026 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.297047 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:42Z","lastTransitionTime":"2025-12-16T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.399286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.399327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.399336 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.399351 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.399361 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:42Z","lastTransitionTime":"2025-12-16T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.501999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.502082 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.502094 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.502135 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.502173 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:42Z","lastTransitionTime":"2025-12-16T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.522041 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.522116 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:42 crc kubenswrapper[4805]: E1216 11:56:42.522197 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:42 crc kubenswrapper[4805]: E1216 11:56:42.522351 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.604349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.604392 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.604400 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.604415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.604424 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:42Z","lastTransitionTime":"2025-12-16T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.707390 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.707429 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.707439 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.707451 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.707461 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:42Z","lastTransitionTime":"2025-12-16T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.809642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.809682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.809693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.809710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.809719 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:42Z","lastTransitionTime":"2025-12-16T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.912422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.912466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.912482 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.912498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:42 crc kubenswrapper[4805]: I1216 11:56:42.912509 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:42Z","lastTransitionTime":"2025-12-16T11:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.014900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.014939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.014953 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.014969 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.014980 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:43Z","lastTransitionTime":"2025-12-16T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.118025 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.118086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.118103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.118126 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.118189 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:43Z","lastTransitionTime":"2025-12-16T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.220673 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.220750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.220761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.220820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.220832 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:43Z","lastTransitionTime":"2025-12-16T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.322541 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.322577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.322586 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.322609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.322619 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:43Z","lastTransitionTime":"2025-12-16T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.425688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.425742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.425755 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.425772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.425783 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:43Z","lastTransitionTime":"2025-12-16T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.522060 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:43 crc kubenswrapper[4805]: E1216 11:56:43.522254 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.522413 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:43 crc kubenswrapper[4805]: E1216 11:56:43.522591 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.528329 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.529030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.529064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.529085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.529095 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:43Z","lastTransitionTime":"2025-12-16T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.631936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.631989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.632005 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.632026 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.632043 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:43Z","lastTransitionTime":"2025-12-16T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.739823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.739853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.739862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.739876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.739884 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:43Z","lastTransitionTime":"2025-12-16T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.842289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.842372 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.842385 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.842401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.842428 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:43Z","lastTransitionTime":"2025-12-16T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.945525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.945565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.945591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.945613 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:43 crc kubenswrapper[4805]: I1216 11:56:43.945625 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:43Z","lastTransitionTime":"2025-12-16T11:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.048912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.048958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.048967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.048982 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.048992 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:44Z","lastTransitionTime":"2025-12-16T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.151624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.151669 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.151678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.151693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.151706 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:44Z","lastTransitionTime":"2025-12-16T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.254483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.254522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.254532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.254547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.254557 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:44Z","lastTransitionTime":"2025-12-16T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.357204 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.357257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.357269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.357286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.357297 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:44Z","lastTransitionTime":"2025-12-16T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.459089 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.459154 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.459163 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.459176 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.459185 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:44Z","lastTransitionTime":"2025-12-16T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.522534 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:44 crc kubenswrapper[4805]: E1216 11:56:44.522719 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.522535 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:44 crc kubenswrapper[4805]: E1216 11:56:44.522813 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.561023 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.561048 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.561057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.561074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.561092 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:44Z","lastTransitionTime":"2025-12-16T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.663696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.663746 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.663759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.663779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.663790 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:44Z","lastTransitionTime":"2025-12-16T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.766411 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.766463 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.766476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.766493 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.766506 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:44Z","lastTransitionTime":"2025-12-16T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.869539 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.869590 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.869601 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.869619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.869633 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:44Z","lastTransitionTime":"2025-12-16T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.971987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.972080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.972104 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.972177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:44 crc kubenswrapper[4805]: I1216 11:56:44.972203 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:44Z","lastTransitionTime":"2025-12-16T11:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.075731 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.075790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.075802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.075820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.075832 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:45Z","lastTransitionTime":"2025-12-16T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.178206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.178282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.178295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.178312 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.178323 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:45Z","lastTransitionTime":"2025-12-16T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.281702 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.281754 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.281766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.281792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.281805 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:45Z","lastTransitionTime":"2025-12-16T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.385401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.385458 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.385468 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.385491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.385504 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:45Z","lastTransitionTime":"2025-12-16T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.488488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.488544 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.488557 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.488584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.488599 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:45Z","lastTransitionTime":"2025-12-16T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.521662 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:45 crc kubenswrapper[4805]: E1216 11:56:45.521815 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.521902 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:45 crc kubenswrapper[4805]: E1216 11:56:45.522123 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.592727 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.592820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.592833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.592856 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.592869 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:45Z","lastTransitionTime":"2025-12-16T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.696369 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.696405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.696419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.696435 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.696448 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:45Z","lastTransitionTime":"2025-12-16T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.799272 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.799320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.799332 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.799350 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.799363 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:45Z","lastTransitionTime":"2025-12-16T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.902827 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.902879 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.902890 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.902911 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:45 crc kubenswrapper[4805]: I1216 11:56:45.902925 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:45Z","lastTransitionTime":"2025-12-16T11:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.005363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.005402 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.005413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.005425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.005435 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:46Z","lastTransitionTime":"2025-12-16T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.108351 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.108437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.108451 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.108479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.108495 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:46Z","lastTransitionTime":"2025-12-16T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.211997 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.212299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.212374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.212448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.212515 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:46Z","lastTransitionTime":"2025-12-16T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.315745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.315793 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.315803 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.315819 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.315832 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:46Z","lastTransitionTime":"2025-12-16T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.418054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.418779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.418879 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.418980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.419061 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:46Z","lastTransitionTime":"2025-12-16T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.521718 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.521808 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:46 crc kubenswrapper[4805]: E1216 11:56:46.521940 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:46 crc kubenswrapper[4805]: E1216 11:56:46.522302 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.525194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.525609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.525817 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.525983 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.526106 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:46Z","lastTransitionTime":"2025-12-16T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.537241 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28ecaa42-ea80-45ad-af58-4ae0b0c48ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crco
nt/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW1216 11:55:43.343587 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1216 11:55:43.343702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1216 11:55:43.344686 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4090580050/tls.crt::/tmp/serving-cert-4090580050/tls.key\\\\\\\"\\\\nI1216 11:55:44.185399 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1216 11:55:44.187538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1216 11:55:44.187556 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1216 11:55:44.187573 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1216 11:55:44.187578 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1216 11:55:44.191749 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1216 11:55:44.191770 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191774 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1216 11:55:44.191778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1216 11:55:44.191782 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1216 11:55:44.191785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1216 11:55:44.191789 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1216 11:55:44.192196 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1216 11:55:44.196802 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.559090 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40ff2c6702afaa14fa95deb43473e8984ad734fb084616d1646c85a844dd7f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.573712 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aff22ac90b0fe150940a41f3ef5135a5cf9de11f08fb184f9c36c9f6d741199\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55975f8e60ef1887d6588cfa7b7080a97b915858c98022e2a1d4b8b5b9341409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.587662 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-btjs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b42062-39de-4c28-b54d-5c63d046cb6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230adcb96367b08bc87ae98f1392e6729e5c146a68a17c57b75f7149a0afc3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpdpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-btjs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.603579 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vffbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"369287d8-0d6d-483f-8c4b-5439ae4d065c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d216456ebbb67a5782522dbfd434d574659cd483a4a2ca25f3a7a5bc963f8bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:35Z\\\",\\\"message\\\":\\\"2025-12-16T11:55:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_96c59c26-372b-4850-96c9-0e8a149cdc4e\\\\n2025-12-16T11:55:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_96c59c26-372b-4850-96c9-0e8a149cdc4e to /host/opt/cni/bin/\\\\n2025-12-16T11:55:50Z [verbose] multus-daemon started\\\\n2025-12-16T11:55:50Z [verbose] Readiness Indicator file check\\\\n2025-12-16T11:56:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz56s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vffbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.617482 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd7db4d-e488-4922-a120-7bf5242ef90f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f08fe672a7cdd5082cea8ed2cacc5c71754493f2470a7a81fa6baff03eee5c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11ed200ae81bc9ecf0225a69d59eb4385bcba5823c87abc590d781340c60490f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e366188c71f88353931b92e19bc6a1c2b58eb3a55bf872be24918600a1f38e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c91decf078a314e3ddc17454eb3699d1a3f7a4a3e1a2b4c003c5cc116e37155\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.628948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.629008 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.629021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.629049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.629063 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:46Z","lastTransitionTime":"2025-12-16T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.641222 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"564ab9d5-81e6-4c2c-8809-b25e08668162\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d4f01dbf6b49c72be14ff4f1afe34d1e7a9d6f318065a5e647cd54e310cd828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c132d37987ae6f9a943117f3adecefc87c19b50d0e7d943a7248346d2387623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80fcbd1eb5b31ba5e942c684dfc8caefc85932870f61a19c3c85aa657ea5a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74db3dd9512f969777ea718add0bd1d6e41f1146dc6fda96c68b91d07c03bd7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d808d53ca97db9213412771a77cd19b413812774247d717f3439b09e04f4c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04f6b2eab750a5901a5309766e6e0cfe9a754d4f9a39e0f4d24c9c2901f300f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c087c9ac4943a38c283664303c3a4fc6a17e0e76443f2675187ce5850022033a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f951494796ea86d5d5064b99ffba92777e82e904951b33e332cd2e53b997f542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.656388 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67e2714181e1f49593a932c03a8eb183d66ab3ef0a33cd3f133470623d36aa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwvnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5gm98\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.714383 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13bc253-a448-40ef-b252-1f47074986e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc9e9f3d773e7d01ae06a38b49b8e51be1fc9036229525ba27b0c1bd27078555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5ae9c99b0d1c6d9488eae6d4be7254fab20de4ec58012551e11e486453d2d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d5ae9c99b0d1c6d9488eae6d4be7254fab20de4ec58012551e11e486453d2d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.731680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:46 crc kubenswrapper[4805]: 
I1216 11:56:46.732110 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.732210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.732286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.732347 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:46Z","lastTransitionTime":"2025-12-16T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.734953 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.748697 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8b07bc87768e3cecb686197152953a6b79cc8e5e0b4b9caca6fef13e180fc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.762553 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-br4tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3f7246-ff42-4e43-a0fd-89ec30096be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55050a53665f4775e03edd7a437c2704a8cb3688cefb6a407065581772d015e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24cbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-br4tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.778402 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"deff7b04-18d2-47cb-ae70-e5b2b70367bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60be558a97b539bbd7e3af0978d8cc5990fc454cd1ecc645a630ed79d37a559d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de16b60d1407f22c9f98ba265ccc920af29b2851cae8ded82e4e2ca4fdc0a47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zl99g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 
11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.793705 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab8404b0-cb69-4ed2-8d72-2ea5b54b9954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://479a3dc781824f9e14adab989897d96d5f5704dbbd9d451da11cfb237be075e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a78e0bcb26c9874c5aaef57f2110701fff2f4764264298047e5a10d441e907a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f39329ea2c8fc519a60ffeaeea20323d775652fc7e63d1ebf73134eebcd081f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.807658 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.821547 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.834799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.834860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.834875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.834895 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.834905 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:46Z","lastTransitionTime":"2025-12-16T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.841589 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qljjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22790094-9e12-4de0-a0bf-5300bed8938f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97bf3da78bc2d814371b6b485b95ac033d0b99e97f5df447a8649131189c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb17fade102aea267cae7f1cbbf24f5d1d96b88ab558cbc4e34c01be854b255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5f6d0bcf50a6ad793c64648566b6e29df695d71a8928ba934b415da2d3a6af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://116f25869b6a81d2bff185ea41d81f2e99b28ece78da217b6f466483954de77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://069b6e07d9717af0230e7e88deef8113bd7f52c8c1f8c3801f011e579b393150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc46f55978d64745c6f3854415c10f5b0c7f426f17713d86c9f79b18c137e67\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa51d5c4e3ecee9a83a1fcc2e0cb128005bc71b1e5696b4328f578e7eb0ade6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qljjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.864465 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb7da1ad-f74d-471f-a98f-274cef7fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-16T11:56:22Z\\\",\\\"message\\\":\\\"Set:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1216 11:56:22.543106 6416 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1216 11:56:22.543067 6416 ovnkube.go:599] Stopped ovnkube\\\\nI1216 11:56:22.543176 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1216 11:56:22.543130 6416 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF1216 11:56:22.543258 6416 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-16T11:56:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lwjrh_openshift-ovn-kubernetes(cb7da1ad-f74d-471f-a98f-274cef7fe393)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-16T11:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-16T11:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-16T11:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlg5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:55:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lwjrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.877427 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc0dc93-5341-4b1e-840d-2c0c951ff142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pb6q4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-16T11:56:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ct6d8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:46Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.937867 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.937936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.937954 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.937977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:46 crc kubenswrapper[4805]: I1216 11:56:46.937989 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:46Z","lastTransitionTime":"2025-12-16T11:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.041306 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.041348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.041358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.041371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.041383 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:47Z","lastTransitionTime":"2025-12-16T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.143425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.143464 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.143475 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.143490 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.143500 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:47Z","lastTransitionTime":"2025-12-16T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.248569 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.248622 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.248631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.248645 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.248654 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:47Z","lastTransitionTime":"2025-12-16T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.351218 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.351291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.351303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.351318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.351330 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:47Z","lastTransitionTime":"2025-12-16T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.454289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.454345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.454358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.454376 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.454388 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:47Z","lastTransitionTime":"2025-12-16T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.521901 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.521977 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:47 crc kubenswrapper[4805]: E1216 11:56:47.522041 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:47 crc kubenswrapper[4805]: E1216 11:56:47.522176 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.529582 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.529762 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:47 crc kubenswrapper[4805]: E1216 11:56:47.529773 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:51.529749609 +0000 UTC m=+145.248007424 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.529892 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:47 crc kubenswrapper[4805]: E1216 11:56:47.529837 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:56:47 crc kubenswrapper[4805]: E1216 11:56:47.529978 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.529940 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:47 crc kubenswrapper[4805]: E1216 11:56:47.529984 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-16 11:57:51.529975095 +0000 UTC m=+145.248232980 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.530048 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:47 crc kubenswrapper[4805]: E1216 11:56:47.530074 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:56:47 crc kubenswrapper[4805]: E1216 11:56:47.530094 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-16 11:57:51.530086518 +0000 UTC m=+145.248344413 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 16 11:56:47 crc kubenswrapper[4805]: E1216 11:56:47.530101 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:56:47 crc kubenswrapper[4805]: E1216 11:56:47.530120 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:56:47 crc kubenswrapper[4805]: E1216 11:56:47.530164 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 16 11:56:47 crc kubenswrapper[4805]: E1216 11:56:47.530182 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 16 11:56:47 crc kubenswrapper[4805]: E1216 11:56:47.530193 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:56:47 crc kubenswrapper[4805]: E1216 11:56:47.530230 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-16 11:57:51.530209492 +0000 UTC m=+145.248467337 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:56:47 crc kubenswrapper[4805]: E1216 11:56:47.530258 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-16 11:57:51.530244853 +0000 UTC m=+145.248502688 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.558567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.558607 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.558618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.558631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.558641 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:47Z","lastTransitionTime":"2025-12-16T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.660823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.660885 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.660895 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.660908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.660919 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:47Z","lastTransitionTime":"2025-12-16T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.764206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.764302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.764319 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.764338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.764349 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:47Z","lastTransitionTime":"2025-12-16T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.867496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.867552 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.867581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.867604 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.867618 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:47Z","lastTransitionTime":"2025-12-16T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.969983 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.970037 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.970047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.970064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:47 crc kubenswrapper[4805]: I1216 11:56:47.970077 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:47Z","lastTransitionTime":"2025-12-16T11:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.072439 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.072496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.072513 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.072536 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.072553 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:48Z","lastTransitionTime":"2025-12-16T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.175711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.176025 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.176103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.176206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.176278 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:48Z","lastTransitionTime":"2025-12-16T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.279467 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.279537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.279555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.279580 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.279607 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:48Z","lastTransitionTime":"2025-12-16T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.381910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.382256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.382367 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.382540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.382652 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:48Z","lastTransitionTime":"2025-12-16T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.485305 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.485339 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.485348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.485362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.485372 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:48Z","lastTransitionTime":"2025-12-16T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.521855 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.521982 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:48 crc kubenswrapper[4805]: E1216 11:56:48.522372 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142" Dec 16 11:56:48 crc kubenswrapper[4805]: E1216 11:56:48.522597 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.527095 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.527178 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.527192 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.527213 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.527229 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:48Z","lastTransitionTime":"2025-12-16T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:48 crc kubenswrapper[4805]: E1216 11:56:48.541003 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:48Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.545530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.545573 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.545587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.545608 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.545623 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:48Z","lastTransitionTime":"2025-12-16T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:48 crc kubenswrapper[4805]: E1216 11:56:48.562640 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:48Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.567295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.567350 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.567365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.567387 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.567403 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:48Z","lastTransitionTime":"2025-12-16T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:48 crc kubenswrapper[4805]: E1216 11:56:48.584905 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:48Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.589578 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.589643 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.589655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.589871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.589887 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:48Z","lastTransitionTime":"2025-12-16T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:48 crc kubenswrapper[4805]: E1216 11:56:48.605514 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:48Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.609869 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.609904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
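Note: every status-patch failure in this window has the same root cause, recorded at the tail of each attempt: the "node.network-node-identity.openshift.io" admission webhook at https://127.0.0.1:9743 is presenting a serving certificate whose notAfter (2025-08-24T17:21:41Z) is months earlier than the node's clock (2025-12-16T11:56:48Z), so TLS verification fails before the patch is ever evaluated. A minimal Go sketch of the same validity-window check, usable to confirm the expiry by hand (endpoint and 10s budget taken from the log; an illustrative diagnostic, not kubelet or OpenShift code):

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint and 10s budget as the failing webhook POST in the log.
	dialer := &net.Dialer{Timeout: 10 * time.Second}
	// InsecureSkipVerify lets the handshake complete so we can read the
	// certificate's dates even though normal verification rejects it.
	conn, err := tls.DialWithDialer(dialer, "tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
		cert.Subject, cert.NotBefore.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	// Mirrors the x509 check failing above: "current time ... is after ..."
	// means the clock is past notAfter.
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		fmt.Println("certificate has expired")
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}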
event="NodeHasNoDiskPressure" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.609913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.609928 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.609939 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:48Z","lastTransitionTime":"2025-12-16T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:48 crc kubenswrapper[4805]: E1216 11:56:48.632296 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-16T11:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37a7c343-177b-430b-a5cf-e05c308f6740\\\",\\\"systemUUID\\\":\\\"7d5db2bf-38e9-4d90-94de-b757ce8f553c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-16T11:56:48Z is after 2025-08-24T17:21:41Z" Dec 16 11:56:48 crc kubenswrapper[4805]: E1216 11:56:48.632578 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.634356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
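Note: the pattern above is the kubelet's bounded node-status retry: within one sync it attempts the PATCH a fixed number of times (nodeStatusUpdateRetry, 5 in the upstream kubelet source), logging "Error updating node status, will retry" for each failure, then gives up with "update node status exceeds retry count" (kubelet_node_status.go:572) and waits for the next sync interval. A compressed Go sketch of that control flow (constant and messages follow upstream; the failing call is a stand-in for the real PATCH):

package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry mirrors the upstream kubelet constant bounding how
// many PATCH attempts one node-status sync makes before giving up.
const nodeStatusUpdateRetry = 5

// tryUpdateNodeStatus stands in for the kubelet's real PATCH; in this log
// every attempt fails identically because the webhook's cert is expired.
func tryUpdateNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": tls: certificate has expired or is not yet valid`)
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		err := tryUpdateNodeStatus()
		if err == nil {
			return // status accepted; nothing more to do this sync
		}
		fmt.Printf("attempt %d: Error updating node status, will retry: %v\n", i+1, err)
	}
	// Matches the give-up line above; the kubelet then defers to the next
	// sync interval rather than retrying forever.
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}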
event="NodeHasSufficientMemory" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.634382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.634392 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.634408 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.634420 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:48Z","lastTransitionTime":"2025-12-16T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.737042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.737085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.737109 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.737124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.737134 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:48Z","lastTransitionTime":"2025-12-16T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.839844 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.839945 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.840003 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.840032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.840092 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:48Z","lastTransitionTime":"2025-12-16T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.942789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.942834 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.942845 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.942862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:48 crc kubenswrapper[4805]: I1216 11:56:48.942873 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:48Z","lastTransitionTime":"2025-12-16T11:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.045472 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.045549 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.045574 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.045602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.045624 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:49Z","lastTransitionTime":"2025-12-16T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.147847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.147896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.147908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.147924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.147936 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:49Z","lastTransitionTime":"2025-12-16T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.249583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.249632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.249644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.249661 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.249675 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:49Z","lastTransitionTime":"2025-12-16T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.352085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.352190 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.352206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.352224 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.352237 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:49Z","lastTransitionTime":"2025-12-16T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.454767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.454812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.454828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.454848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.454863 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:49Z","lastTransitionTime":"2025-12-16T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.522409 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.522514 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:49 crc kubenswrapper[4805]: E1216 11:56:49.522620 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 16 11:56:49 crc kubenswrapper[4805]: E1216 11:56:49.522843 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.557699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.558212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.558296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.558399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.558483 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:49Z","lastTransitionTime":"2025-12-16T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
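Note: the "No sandbox for pod can be found" and "Error syncing pod, skipping" entries fail for the same underlying reason as the Ready condition: the container runtime keeps reporting NetworkReady=false until a CNI network configuration appears in /etc/kubernetes/cni/net.d/, which on this cluster is written by the network provider (OVN-Kubernetes) once it comes up. A small Go sketch of the presence check that is failing here (directory taken from the log; the real runtime also parses and validates the config rather than just globbing for it):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the recurring kubelet error message.
	confDir := "/etc/kubernetes/cni/net.d"

	// The runtime treats the network plugin as ready only once at least one
	// CNI config exists there; this log shows the empty-directory case.
	var configs []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(confDir, pat))
		configs = append(configs, matches...)
	}

	if len(configs) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		os.Exit(1)
	}
	fmt.Println("found CNI config(s):", configs)
}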
Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.661457 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.661700 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.661763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.661829 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.661894 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:49Z","lastTransitionTime":"2025-12-16T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.765087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.765508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.765518 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.765535 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.765547 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:49Z","lastTransitionTime":"2025-12-16T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.867931 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.867985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.868002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.868025 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.868042 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:49Z","lastTransitionTime":"2025-12-16T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.970995 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.971054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.971068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.971087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:49 crc kubenswrapper[4805]: I1216 11:56:49.971100 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:49Z","lastTransitionTime":"2025-12-16T11:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.073576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.073636 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.073648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.073667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.073679 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:50Z","lastTransitionTime":"2025-12-16T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.177754 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.177814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.177832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.177855 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.177869 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:50Z","lastTransitionTime":"2025-12-16T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.281081 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.281124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.281286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.281320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.281336 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:50Z","lastTransitionTime":"2025-12-16T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.383881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.383942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.383952 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.383967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.383977 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:50Z","lastTransitionTime":"2025-12-16T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.486014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.486059 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.486071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.486091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.486104 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:50Z","lastTransitionTime":"2025-12-16T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.522615 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.522678 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 16 11:56:50 crc kubenswrapper[4805]: E1216 11:56:50.522752 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142"
Dec 16 11:56:50 crc kubenswrapper[4805]: E1216 11:56:50.523014 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.523662 4805 scope.go:117] "RemoveContainer" containerID="9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.588196 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.588229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.588252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.588264 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.588274 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:50Z","lastTransitionTime":"2025-12-16T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.690020 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.690090 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.690102 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.690117 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.690129 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:50Z","lastTransitionTime":"2025-12-16T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.793554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.793622 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.793635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.793652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.793663 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:50Z","lastTransitionTime":"2025-12-16T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.895463 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.895495 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.895503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.895515 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:50 crc kubenswrapper[4805]: I1216 11:56:50.895532 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:50Z","lastTransitionTime":"2025-12-16T11:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.015230 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.015292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.015309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.015329 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.015341 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:51Z","lastTransitionTime":"2025-12-16T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.123491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.123548 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.123560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.123579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.123595 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:51Z","lastTransitionTime":"2025-12-16T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.226466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.226499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.226509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.226523 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.226533 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:51Z","lastTransitionTime":"2025-12-16T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.330103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.330186 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.330202 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.330218 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.330228 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:51Z","lastTransitionTime":"2025-12-16T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.432550 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.432591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.432600 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.432621 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.432631 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:51Z","lastTransitionTime":"2025-12-16T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.521679 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 16 11:56:51 crc kubenswrapper[4805]: E1216 11:56:51.521809 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.522024 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 16 11:56:51 crc kubenswrapper[4805]: E1216 11:56:51.522091 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.535528 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.535554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.535562 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.535575 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.535583 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:51Z","lastTransitionTime":"2025-12-16T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.637866 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.638097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.638109 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.638126 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.638136 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:51Z","lastTransitionTime":"2025-12-16T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.744252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.744282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.744290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.744303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.744313 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:51Z","lastTransitionTime":"2025-12-16T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.847945 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.847989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.848000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.848019 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.848030 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:51Z","lastTransitionTime":"2025-12-16T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.950939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.950982 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.950996 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.951013 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.951024 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:51Z","lastTransitionTime":"2025-12-16T11:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.983603 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovnkube-controller/2.log"
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.986629 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerStarted","Data":"ae1de15ea708b8695292548dc5dd04046670fdd69f9d040e21ff16aeabfa912d"}
Dec 16 11:56:51 crc kubenswrapper[4805]: I1216 11:56:51.987174 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.053651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.053715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.053725 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.053757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.053767 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:52Z","lastTransitionTime":"2025-12-16T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.118248 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=20.11818299 podStartE2EDuration="20.11818299s" podCreationTimestamp="2025-12-16 11:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:56:52.116743508 +0000 UTC m=+85.835001313" watchObservedRunningTime="2025-12-16 11:56:52.11818299 +0000 UTC m=+85.836440795"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.156039 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.156312 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.156396 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.156530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.156644 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:52Z","lastTransitionTime":"2025-12-16T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.258226 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-br4tl" podStartSLOduration=64.258209435 podStartE2EDuration="1m4.258209435s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:56:52.253637703 +0000 UTC m=+85.971895508" watchObservedRunningTime="2025-12-16 11:56:52.258209435 +0000 UTC m=+85.976467240"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.259515 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.259540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.259551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.259566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.259577 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:52Z","lastTransitionTime":"2025-12-16T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.328637 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zl99g" podStartSLOduration=64.328618914 podStartE2EDuration="1m4.328618914s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:56:52.294503291 +0000 UTC m=+86.012761096" watchObservedRunningTime="2025-12-16 11:56:52.328618914 +0000 UTC m=+86.046876729"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.345129 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=65.34511268 podStartE2EDuration="1m5.34511268s" podCreationTimestamp="2025-12-16 11:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:56:52.328883662 +0000 UTC m=+86.047141467" watchObservedRunningTime="2025-12-16 11:56:52.34511268 +0000 UTC m=+86.063370495"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.361663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.361686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.361693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.361721 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.361730 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:52Z","lastTransitionTime":"2025-12-16T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.390189 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-qljjv" podStartSLOduration=64.390135577 podStartE2EDuration="1m4.390135577s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:56:52.389966633 +0000 UTC m=+86.108224438" watchObservedRunningTime="2025-12-16 11:56:52.390135577 +0000 UTC m=+86.108393392"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.422269 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" podStartSLOduration=64.422252233 podStartE2EDuration="1m4.422252233s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:56:52.421877492 +0000 UTC m=+86.140135297" watchObservedRunningTime="2025-12-16 11:56:52.422252233 +0000 UTC m=+86.140510048"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.463469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.463547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.463562 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.463585 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.463616 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:52Z","lastTransitionTime":"2025-12-16T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.481924 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.481904472 podStartE2EDuration="1m8.481904472s" podCreationTimestamp="2025-12-16 11:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:56:52.464111549 +0000 UTC m=+86.182369364" watchObservedRunningTime="2025-12-16 11:56:52.481904472 +0000 UTC m=+86.200162297"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.519085 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-btjs7" podStartSLOduration=64.519068453 podStartE2EDuration="1m4.519068453s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:56:52.51825459 +0000 UTC m=+86.236512405" watchObservedRunningTime="2025-12-16 11:56:52.519068453 +0000 UTC m=+86.237326268"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.521744 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.521755 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8"
Dec 16 11:56:52 crc kubenswrapper[4805]: E1216 11:56:52.521932 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 16 11:56:52 crc kubenswrapper[4805]: E1216 11:56:52.522057 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.537300 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-vffbc" podStartSLOduration=64.537280568 podStartE2EDuration="1m4.537280568s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:56:52.536965749 +0000 UTC m=+86.255223734" watchObservedRunningTime="2025-12-16 11:56:52.537280568 +0000 UTC m=+86.255538393"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.555307 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=37.555292197 podStartE2EDuration="37.555292197s" podCreationTimestamp="2025-12-16 11:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:56:52.554262378 +0000 UTC m=+86.272520183" watchObservedRunningTime="2025-12-16 11:56:52.555292197 +0000 UTC m=+86.273550012"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.565974 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.566009 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.566017 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.566033 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.566042 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:52Z","lastTransitionTime":"2025-12-16T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.600283 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=65.600261753 podStartE2EDuration="1m5.600261753s" podCreationTimestamp="2025-12-16 11:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:56:52.582690137 +0000 UTC m=+86.300947962" watchObservedRunningTime="2025-12-16 11:56:52.600261753 +0000 UTC m=+86.318519568"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.667641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.667679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.667689 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.667703 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.667711 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:52Z","lastTransitionTime":"2025-12-16T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.769935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.769973 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.769983 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.769997 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.770014 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:52Z","lastTransitionTime":"2025-12-16T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.872535 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.872598 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.872610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.872626 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.872636 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:52Z","lastTransitionTime":"2025-12-16T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.877187 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podStartSLOduration=64.877165434 podStartE2EDuration="1m4.877165434s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:56:52.600821059 +0000 UTC m=+86.319078854" watchObservedRunningTime="2025-12-16 11:56:52.877165434 +0000 UTC m=+86.595423259"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.878477 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ct6d8"]
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.975830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.975886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.975902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.975927 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.975943 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:52Z","lastTransitionTime":"2025-12-16T11:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:52 crc kubenswrapper[4805]: I1216 11:56:52.989289 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8"
Dec 16 11:56:52 crc kubenswrapper[4805]: E1216 11:56:52.989735 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.078829 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.078875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.078886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.078902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.078913 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:53Z","lastTransitionTime":"2025-12-16T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.181258 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.181326 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.181337 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.181360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.181373 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:53Z","lastTransitionTime":"2025-12-16T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.283661 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.283696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.283704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.283717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.283727 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:53Z","lastTransitionTime":"2025-12-16T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.385818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.385856 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.385865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.385878 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.385891 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:53Z","lastTransitionTime":"2025-12-16T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.488664 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.488695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.488703 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.488715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.488724 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:53Z","lastTransitionTime":"2025-12-16T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.522058 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.522080 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 16 11:56:53 crc kubenswrapper[4805]: E1216 11:56:53.522216 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 16 11:56:53 crc kubenswrapper[4805]: E1216 11:56:53.522281 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.591473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.591512 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.591521 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.591537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.591547 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:53Z","lastTransitionTime":"2025-12-16T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.693778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.693806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.693814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.693836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.693845 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:53Z","lastTransitionTime":"2025-12-16T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.795397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.795442 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.795455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.795475 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.795488 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:53Z","lastTransitionTime":"2025-12-16T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.898076 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.898181 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.898205 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.898236 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:53 crc kubenswrapper[4805]: I1216 11:56:53.898258 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:53Z","lastTransitionTime":"2025-12-16T11:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.000584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.000631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.000653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.000669 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.000681 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:54Z","lastTransitionTime":"2025-12-16T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.103847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.103896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.103912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.103930 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.103944 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:54Z","lastTransitionTime":"2025-12-16T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.206644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.206695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.206707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.206730 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.206741 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:54Z","lastTransitionTime":"2025-12-16T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.309127 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.309184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.309194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.309210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.309220 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:54Z","lastTransitionTime":"2025-12-16T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.411112 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.411177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.411188 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.411208 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.411220 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:54Z","lastTransitionTime":"2025-12-16T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.513742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.514118 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.514254 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.514408 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.514538 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:54Z","lastTransitionTime":"2025-12-16T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.522311 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 16 11:56:54 crc kubenswrapper[4805]: E1216 11:56:54.522481 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.522774 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8"
Dec 16 11:56:54 crc kubenswrapper[4805]: E1216 11:56:54.523046 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ct6d8" podUID="2cc0dc93-5341-4b1e-840d-2c0c951ff142"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.617247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.617299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.617309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.617327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.617339 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:54Z","lastTransitionTime":"2025-12-16T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.721116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.721175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.721189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.721209 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.721220 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:54Z","lastTransitionTime":"2025-12-16T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.824708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.824756 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.824774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.824823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.824841 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:54Z","lastTransitionTime":"2025-12-16T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.928838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.929184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.929267 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.929342 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 16 11:56:54 crc kubenswrapper[4805]: I1216 11:56:54.929398 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:54Z","lastTransitionTime":"2025-12-16T11:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.035771 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.035815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.035826 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.035844 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.035856 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:55Z","lastTransitionTime":"2025-12-16T11:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.138714 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.138811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.138822 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.138838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.138870 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:55Z","lastTransitionTime":"2025-12-16T11:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.241770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.241822 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.241837 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.241859 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.241874 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:55Z","lastTransitionTime":"2025-12-16T11:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.343754 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.343790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.343799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.343813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.343822 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:55Z","lastTransitionTime":"2025-12-16T11:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.446874 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.446946 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.446970 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.447004 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.447030 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T11:56:55Z","lastTransitionTime":"2025-12-16T11:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.522199 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.522271 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:55 crc kubenswrapper[4805]: E1216 11:56:55.522337 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
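While it waits for a CNI configuration to appear under /etc/kubernetes/cni/net.d/, the kubelet re-records the same Ready=False condition on every status-update pass; only the sub-second timestamps differ between passes. As a reading aid, here is a minimal sketch (not part of the log; it assumes only the journald line shape shown above, and the script name is hypothetical) that tallies those "Node became not ready" records and reports the first and last heartbeat timestamps:

    import re
    import sys

    # Sketch: count kubelet "Node became not ready" records on stdin and
    # report the first/last lastHeartbeatTime seen in the condition JSON.
    HEARTBEAT = re.compile(r'"lastHeartbeatTime":"([^"]+)"')

    stamps = []
    for line in sys.stdin:
        if "Node became not ready" in line:
            m = HEARTBEAT.search(line)
            if m:
                stamps.append(m.group(1))

    if stamps:
        print(f"{len(stamps)} NotReady heartbeats ({stamps[0]} .. {stamps[-1]})")

Fed an excerpt like this one (for example, journalctl -u kubelet | python3 tally_notready.py), it would print a single summary line in place of the repeated condition dumps.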
Dec 16 11:56:55 crc kubenswrapper[4805]: E1216 11:56:55.522420 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.549309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.549349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.549359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.549375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.549459 4805 kubelet_node_status.go:538] "Fast updating node status as it just became ready"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.592787 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4jnvf"]
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.593463 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4jnvf"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.594481 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nq4vq"]
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.595298 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.595885 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j"]
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.596721 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j"
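Once the node flips to Ready, the API source floods the sync loop with one "SyncLoop ADD" record per pod, each followed by reflector cache-population and volume-attach records like those below. A companion sketch in the same spirit (again hypothetical; it matches only the pods=["namespace/name"] shape shown in these records) groups the admitted pods by namespace:

    import re
    import sys

    # Sketch: collect pods admitted via kubelet "SyncLoop ADD" records and
    # print them grouped by namespace.
    ADD = re.compile(r'"SyncLoop ADD" source="api" pods=\["([^"]+)"\]')

    by_ns = {}
    for line in sys.stdin:
        for ref in ADD.findall(line):
            ns, _, name = ref.partition("/")
            by_ns.setdefault(ns, []).append(name)

    for ns in sorted(by_ns):
        print(f"{ns}: {', '.join(sorted(by_ns[ns]))}")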
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.599796 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.599956 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.600290 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.602757 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.602864 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.602864 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.602955 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.604743 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.604874 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.605083 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.605478 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.608034 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.609010 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv"]
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.609508 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q7lmj"]
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.609998 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.610884 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj"
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.611272 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf"]
Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.612107 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.617009 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-dxhld"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.618109 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.622982 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.623546 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.623735 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.630107 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.630567 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.631361 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.631568 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.632178 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.632698 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.633983 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.634008 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.634136 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.634254 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.634340 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.634465 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.636779 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.641702 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 16 11:56:55 crc 
kubenswrapper[4805]: I1216 11:56:55.644914 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-28rcg"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.645192 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lzhxz"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.645461 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-prkth"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.645914 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.646484 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.649517 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.649870 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.650487 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.650729 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.650893 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.651019 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.651343 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.651455 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.651603 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.651716 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.651841 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.651941 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.652049 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.652209 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 16 11:56:55 crc 
kubenswrapper[4805]: I1216 11:56:55.655489 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.655886 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.657344 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.657502 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.657805 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.658946 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.659124 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.659298 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.664409 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mwpbn"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.671997 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.672322 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.672849 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.673102 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.673256 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.673861 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.674885 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-s9j77"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.675343 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-s9j77" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.677854 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.678180 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.678293 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.680395 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.680773 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-r2dsr"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.680985 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zbcnq"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.681380 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zbcnq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.681662 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.681925 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.687977 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.689790 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.690037 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.690229 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.690349 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.690509 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.690664 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.690775 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.690912 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.691021 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.691130 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.691275 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.691406 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.691550 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.691711 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.691828 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.691929 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.692026 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 16 11:56:55 
crc kubenswrapper[4805]: I1216 11:56:55.692182 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.692379 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.692561 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.692719 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.692827 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.692926 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.693027 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.693132 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.693295 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.693455 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.693610 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.694103 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.706231 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.706286 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-prkth"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.711183 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.717589 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.728995 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.734354 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.742675 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dxhld"] Dec 16 11:56:55 crc 
kubenswrapper[4805]: I1216 11:56:55.744030 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.744040 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.745036 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.745451 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.746499 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.747059 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.747450 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.762495 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.766460 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.767375 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769289 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769398 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769603 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5826956e-b8ea-4053-8987-ebe9f350c975-serving-cert\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769630 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/50e153c5-173b-4a28-a028-5ebf2ff22054-machine-approver-tls\") pod \"machine-approver-56656f9798-gz75j\" (UID: \"50e153c5-173b-4a28-a028-5ebf2ff22054\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769650 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbccfd36-acd0-4302-befd-f032ebddb856-serving-cert\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769666 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-oauth-serving-cert\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769682 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d03a289-7b6f-4264-83cc-fe33b26d5a2c-config\") pod \"console-operator-58897d9998-28rcg\" (UID: \"2d03a289-7b6f-4264-83cc-fe33b26d5a2c\") " pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769699 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dbccfd36-acd0-4302-befd-f032ebddb856-etcd-client\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769727 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5826956e-b8ea-4053-8987-ebe9f350c975-audit-dir\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769743 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5826956e-b8ea-4053-8987-ebe9f350c975-encryption-config\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769767 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t7bj\" (UniqueName: \"kubernetes.io/projected/7f262dc5-9bae-450c-ab81-3172ba82e700-kube-api-access-7t7bj\") pod \"controller-manager-879f6c89f-nq4vq\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769783 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2c76\" (UniqueName: \"kubernetes.io/projected/9a656193-7884-4e3d-8a17-4ff680c4a116-kube-api-access-q2c76\") pod \"machine-api-operator-5694c8668f-q7lmj\" (UID: \"9a656193-7884-4e3d-8a17-4ff680c4a116\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769807 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5826956e-b8ea-4053-8987-ebe9f350c975-node-pullsecrets\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769825 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dbccfd36-acd0-4302-befd-f032ebddb856-audit-policies\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769844 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a0ad740-271d-452e-b8be-d5e190a46719-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lzhxz\" (UID: \"5a0ad740-271d-452e-b8be-d5e190a46719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769871 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5826956e-b8ea-4053-8987-ebe9f350c975-config\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769896 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-serving-cert\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769913 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-config\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769942 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5826956e-b8ea-4053-8987-ebe9f350c975-audit\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769961 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-serving-cert\") pod \"route-controller-manager-6576b87f9c-jt4tv\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.769982 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrpjr\" (UniqueName: \"kubernetes.io/projected/2d03a289-7b6f-4264-83cc-fe33b26d5a2c-kube-api-access-hrpjr\") pod \"console-operator-58897d9998-28rcg\" (UID: \"2d03a289-7b6f-4264-83cc-fe33b26d5a2c\") " pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.770019 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9a656193-7884-4e3d-8a17-4ff680c4a116-config\") pod \"machine-api-operator-5694c8668f-q7lmj\" (UID: \"9a656193-7884-4e3d-8a17-4ff680c4a116\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.770036 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50e153c5-173b-4a28-a028-5ebf2ff22054-config\") pod \"machine-approver-56656f9798-gz75j\" (UID: \"50e153c5-173b-4a28-a028-5ebf2ff22054\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.770052 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-config\") pod \"controller-manager-879f6c89f-nq4vq\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.770068 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dbccfd36-acd0-4302-befd-f032ebddb856-encryption-config\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.770085 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dbccfd36-acd0-4302-befd-f032ebddb856-audit-dir\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.770108 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.770332 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4jnvf"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.770112 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-client-ca\") pod \"route-controller-manager-6576b87f9c-jt4tv\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.771851 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4dds\" (UniqueName: \"kubernetes.io/projected/2377e2e1-2de4-46e9-a0a9-768f5dd52317-kube-api-access-v4dds\") pod \"openshift-config-operator-7777fb866f-prkth\" (UID: \"2377e2e1-2de4-46e9-a0a9-768f5dd52317\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.771878 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50e153c5-173b-4a28-a028-5ebf2ff22054-auth-proxy-config\") pod \"machine-approver-56656f9798-gz75j\" (UID: \"50e153c5-173b-4a28-a028-5ebf2ff22054\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.771895 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-service-ca\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.771918 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brmjb\" (UniqueName: \"kubernetes.io/projected/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-kube-api-access-brmjb\") pod \"route-controller-manager-6576b87f9c-jt4tv\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.771927 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772075 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.771933 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2377e2e1-2de4-46e9-a0a9-768f5dd52317-serving-cert\") pod \"openshift-config-operator-7777fb866f-prkth\" (UID: \"2377e2e1-2de4-46e9-a0a9-768f5dd52317\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772364 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a656193-7884-4e3d-8a17-4ff680c4a116-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q7lmj\" (UID: \"9a656193-7884-4e3d-8a17-4ff680c4a116\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772496 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a0ad740-271d-452e-b8be-d5e190a46719-service-ca-bundle\") pod \"authentication-operator-69f744f599-lzhxz\" (UID: \"5a0ad740-271d-452e-b8be-d5e190a46719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772527 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7ndv\" (UniqueName: \"kubernetes.io/projected/5826956e-b8ea-4053-8987-ebe9f350c975-kube-api-access-p7ndv\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772545 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d03a289-7b6f-4264-83cc-fe33b26d5a2c-serving-cert\") pod \"console-operator-58897d9998-28rcg\" (UID: \"2d03a289-7b6f-4264-83cc-fe33b26d5a2c\") " pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772562 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a0ad740-271d-452e-b8be-d5e190a46719-config\") pod \"authentication-operator-69f744f599-lzhxz\" (UID: \"5a0ad740-271d-452e-b8be-d5e190a46719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772587 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-config\") pod \"route-controller-manager-6576b87f9c-jt4tv\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772604 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-trusted-ca-bundle\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772620 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5826956e-b8ea-4053-8987-ebe9f350c975-image-import-ca\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772635 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbccfd36-acd0-4302-befd-f032ebddb856-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772652 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2377e2e1-2de4-46e9-a0a9-768f5dd52317-available-featuregates\") pod \"openshift-config-operator-7777fb866f-prkth\" (UID: \"2377e2e1-2de4-46e9-a0a9-768f5dd52317\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772669 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9eab42-29fd-4d17-8c0d-57eb48913cb7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6xf\" (UID: \"0b9eab42-29fd-4d17-8c0d-57eb48913cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772687 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dbccfd36-acd0-4302-befd-f032ebddb856-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772704 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bdjx\" (UniqueName: \"kubernetes.io/projected/dbccfd36-acd0-4302-befd-f032ebddb856-kube-api-access-6bdjx\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772719 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a0ad740-271d-452e-b8be-d5e190a46719-serving-cert\") pod \"authentication-operator-69f744f599-lzhxz\" (UID: \"5a0ad740-271d-452e-b8be-d5e190a46719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772736 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-oauth-config\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772752 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f262dc5-9bae-450c-ab81-3172ba82e700-serving-cert\") pod \"controller-manager-879f6c89f-nq4vq\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772766 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hqns\" (UniqueName: 
\"kubernetes.io/projected/0b9eab42-29fd-4d17-8c0d-57eb48913cb7-kube-api-access-9hqns\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6xf\" (UID: \"0b9eab42-29fd-4d17-8c0d-57eb48913cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772785 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5826956e-b8ea-4053-8987-ebe9f350c975-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772824 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-client-ca\") pod \"controller-manager-879f6c89f-nq4vq\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772857 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9dlq\" (UniqueName: \"kubernetes.io/projected/5a0ad740-271d-452e-b8be-d5e190a46719-kube-api-access-g9dlq\") pod \"authentication-operator-69f744f599-lzhxz\" (UID: \"5a0ad740-271d-452e-b8be-d5e190a46719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772872 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s28js\" (UniqueName: \"kubernetes.io/projected/50e153c5-173b-4a28-a028-5ebf2ff22054-kube-api-access-s28js\") pod \"machine-approver-56656f9798-gz75j\" (UID: \"50e153c5-173b-4a28-a028-5ebf2ff22054\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772888 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76cw7\" (UniqueName: \"kubernetes.io/projected/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-kube-api-access-76cw7\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772905 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5826956e-b8ea-4053-8987-ebe9f350c975-etcd-serving-ca\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772923 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9a656193-7884-4e3d-8a17-4ff680c4a116-images\") pod \"machine-api-operator-5694c8668f-q7lmj\" (UID: \"9a656193-7884-4e3d-8a17-4ff680c4a116\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772940 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/2d03a289-7b6f-4264-83cc-fe33b26d5a2c-trusted-ca\") pod \"console-operator-58897d9998-28rcg\" (UID: \"2d03a289-7b6f-4264-83cc-fe33b26d5a2c\") " pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772954 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5826956e-b8ea-4053-8987-ebe9f350c975-etcd-client\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772969 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9eab42-29fd-4d17-8c0d-57eb48913cb7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6xf\" (UID: \"0b9eab42-29fd-4d17-8c0d-57eb48913cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.772984 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nq4vq\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.773345 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.773393 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sb7p7"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.773818 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.774194 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.774467 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.777410 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.782356 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.782872 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.785543 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.785729 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.786877 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.787511 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.790075 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.793495 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.785230 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.794450 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.794917 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-j4rx5"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.795202 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-s9j77"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.795271 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.795467 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.798671 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.800631 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jccm"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.800726 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.801468 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jccm" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.802083 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.802586 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.807069 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.808309 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.809255 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.809256 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.809884 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.812786 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nq4vq"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.815567 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.815710 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sjff7"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.816680 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.817370 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.836651 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.845668 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.845902 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.856756 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.860111 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x9qqr"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.860248 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.861392 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kkf49"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.862128 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.873838 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-config\") pod \"controller-manager-879f6c89f-nq4vq\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.873882 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dbccfd36-acd0-4302-befd-f032ebddb856-encryption-config\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.873928 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dbccfd36-acd0-4302-befd-f032ebddb856-audit-dir\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.873947 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50e153c5-173b-4a28-a028-5ebf2ff22054-config\") pod \"machine-approver-56656f9798-gz75j\" (UID: \"50e153c5-173b-4a28-a028-5ebf2ff22054\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.873966 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-client-ca\") pod \"route-controller-manager-6576b87f9c-jt4tv\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.873988 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4dds\" (UniqueName: \"kubernetes.io/projected/2377e2e1-2de4-46e9-a0a9-768f5dd52317-kube-api-access-v4dds\") pod \"openshift-config-operator-7777fb866f-prkth\" (UID: \"2377e2e1-2de4-46e9-a0a9-768f5dd52317\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" Dec 16 11:56:55 crc kubenswrapper[4805]: 
I1216 11:56:55.874009 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae46fc1b-666a-498f-92f4-673cb535530e-etcd-service-ca\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874033 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brmjb\" (UniqueName: \"kubernetes.io/projected/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-kube-api-access-brmjb\") pod \"route-controller-manager-6576b87f9c-jt4tv\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874050 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50e153c5-173b-4a28-a028-5ebf2ff22054-auth-proxy-config\") pod \"machine-approver-56656f9798-gz75j\" (UID: \"50e153c5-173b-4a28-a028-5ebf2ff22054\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874069 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-service-ca\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874087 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a6697a04-3efc-4511-bd87-42630448efe0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wt247\" (UID: \"a6697a04-3efc-4511-bd87-42630448efe0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874107 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7893fbf-4354-4fc6-bc2a-fdadbdee1311-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zbcnq\" (UID: \"e7893fbf-4354-4fc6-bc2a-fdadbdee1311\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zbcnq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874128 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a656193-7884-4e3d-8a17-4ff680c4a116-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q7lmj\" (UID: \"9a656193-7884-4e3d-8a17-4ff680c4a116\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874166 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a0ad740-271d-452e-b8be-d5e190a46719-service-ca-bundle\") pod \"authentication-operator-69f744f599-lzhxz\" (UID: \"5a0ad740-271d-452e-b8be-d5e190a46719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874185 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2377e2e1-2de4-46e9-a0a9-768f5dd52317-serving-cert\") pod \"openshift-config-operator-7777fb866f-prkth\" (UID: \"2377e2e1-2de4-46e9-a0a9-768f5dd52317\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874202 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae46fc1b-666a-498f-92f4-673cb535530e-etcd-client\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874225 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ndv\" (UniqueName: \"kubernetes.io/projected/5826956e-b8ea-4053-8987-ebe9f350c975-kube-api-access-p7ndv\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874244 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d03a289-7b6f-4264-83cc-fe33b26d5a2c-serving-cert\") pod \"console-operator-58897d9998-28rcg\" (UID: \"2d03a289-7b6f-4264-83cc-fe33b26d5a2c\") " pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874263 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcc6r\" (UniqueName: \"kubernetes.io/projected/be38a4fa-fb1e-4f8d-9887-b4232ed7c087-kube-api-access-qcc6r\") pod \"openshift-controller-manager-operator-756b6f6bc6-qpz4c\" (UID: \"be38a4fa-fb1e-4f8d-9887-b4232ed7c087\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874281 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874315 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-config\") pod \"route-controller-manager-6576b87f9c-jt4tv\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874335 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-trusted-ca-bundle\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874350 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5a0ad740-271d-452e-b8be-d5e190a46719-config\") pod \"authentication-operator-69f744f599-lzhxz\" (UID: \"5a0ad740-271d-452e-b8be-d5e190a46719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874368 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874387 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5826956e-b8ea-4053-8987-ebe9f350c975-image-import-ca\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874404 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbccfd36-acd0-4302-befd-f032ebddb856-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.874419 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2377e2e1-2de4-46e9-a0a9-768f5dd52317-available-featuregates\") pod \"openshift-config-operator-7777fb866f-prkth\" (UID: \"2377e2e1-2de4-46e9-a0a9-768f5dd52317\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.887373 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50e153c5-173b-4a28-a028-5ebf2ff22054-auth-proxy-config\") pod \"machine-approver-56656f9798-gz75j\" (UID: \"50e153c5-173b-4a28-a028-5ebf2ff22054\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.887532 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-client-ca\") pod \"route-controller-manager-6576b87f9c-jt4tv\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.887808 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-service-ca\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.887919 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-config\") pod \"controller-manager-879f6c89f-nq4vq\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.888576 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dbccfd36-acd0-4302-befd-f032ebddb856-audit-dir\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.892388 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gsh2j"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.894029 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kkf49" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.894118 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.895494 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a0ad740-271d-452e-b8be-d5e190a46719-config\") pod \"authentication-operator-69f744f599-lzhxz\" (UID: \"5a0ad740-271d-452e-b8be-d5e190a46719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.895645 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.896248 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a656193-7884-4e3d-8a17-4ff680c4a116-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q7lmj\" (UID: \"9a656193-7884-4e3d-8a17-4ff680c4a116\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.898246 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a0ad740-271d-452e-b8be-d5e190a46719-service-ca-bundle\") pod \"authentication-operator-69f744f599-lzhxz\" (UID: \"5a0ad740-271d-452e-b8be-d5e190a46719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.893178 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9eab42-29fd-4d17-8c0d-57eb48913cb7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6xf\" (UID: \"0b9eab42-29fd-4d17-8c0d-57eb48913cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.898623 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba-config\") pod \"kube-controller-manager-operator-78b949d7b-qm6md\" (UID: \"7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.898665 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dbccfd36-acd0-4302-befd-f032ebddb856-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.898713 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bdjx\" (UniqueName: \"kubernetes.io/projected/dbccfd36-acd0-4302-befd-f032ebddb856-kube-api-access-6bdjx\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.898738 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/832ba633-f44c-4aa1-8791-65656ed2a744-audit-dir\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.898764 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a0ad740-271d-452e-b8be-d5e190a46719-serving-cert\") pod \"authentication-operator-69f744f599-lzhxz\" (UID: \"5a0ad740-271d-452e-b8be-d5e190a46719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.898809 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f262dc5-9bae-450c-ab81-3172ba82e700-serving-cert\") pod \"controller-manager-879f6c89f-nq4vq\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.898832 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hqns\" (UniqueName: \"kubernetes.io/projected/0b9eab42-29fd-4d17-8c0d-57eb48913cb7-kube-api-access-9hqns\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6xf\" (UID: \"0b9eab42-29fd-4d17-8c0d-57eb48913cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.898852 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-oauth-config\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.898924 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qm6md\" (UID: \"7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.898973 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5826956e-b8ea-4053-8987-ebe9f350c975-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899013 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ae46fc1b-666a-498f-92f4-673cb535530e-etcd-ca\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899050 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6697a04-3efc-4511-bd87-42630448efe0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wt247\" (UID: \"a6697a04-3efc-4511-bd87-42630448efe0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899078 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-client-ca\") pod \"controller-manager-879f6c89f-nq4vq\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899117 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9dlq\" (UniqueName: \"kubernetes.io/projected/5a0ad740-271d-452e-b8be-d5e190a46719-kube-api-access-g9dlq\") pod \"authentication-operator-69f744f599-lzhxz\" (UID: \"5a0ad740-271d-452e-b8be-d5e190a46719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899161 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s28js\" (UniqueName: \"kubernetes.io/projected/50e153c5-173b-4a28-a028-5ebf2ff22054-kube-api-access-s28js\") pod \"machine-approver-56656f9798-gz75j\" (UID: \"50e153c5-173b-4a28-a028-5ebf2ff22054\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899183 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76cw7\" (UniqueName: \"kubernetes.io/projected/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-kube-api-access-76cw7\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899206 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae46fc1b-666a-498f-92f4-673cb535530e-config\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899283 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-audit-policies\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:55 crc 
kubenswrapper[4805]: I1216 11:56:55.899306 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899359 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5826956e-b8ea-4053-8987-ebe9f350c975-etcd-serving-ca\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899389 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899451 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9a656193-7884-4e3d-8a17-4ff680c4a116-images\") pod \"machine-api-operator-5694c8668f-q7lmj\" (UID: \"9a656193-7884-4e3d-8a17-4ff680c4a116\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899480 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d03a289-7b6f-4264-83cc-fe33b26d5a2c-trusted-ca\") pod \"console-operator-58897d9998-28rcg\" (UID: \"2d03a289-7b6f-4264-83cc-fe33b26d5a2c\") " pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899522 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899546 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899572 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nq4vq\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899629 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5826956e-b8ea-4053-8987-ebe9f350c975-etcd-client\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899703 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9eab42-29fd-4d17-8c0d-57eb48913cb7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6xf\" (UID: \"0b9eab42-29fd-4d17-8c0d-57eb48913cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899729 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899788 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5826956e-b8ea-4053-8987-ebe9f350c975-serving-cert\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899849 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/50e153c5-173b-4a28-a028-5ebf2ff22054-machine-approver-tls\") pod \"machine-approver-56656f9798-gz75j\" (UID: \"50e153c5-173b-4a28-a028-5ebf2ff22054\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899882 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dbccfd36-acd0-4302-befd-f032ebddb856-etcd-client\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899924 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbccfd36-acd0-4302-befd-f032ebddb856-serving-cert\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899946 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-oauth-serving-cert\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899968 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d03a289-7b6f-4264-83cc-fe33b26d5a2c-config\") pod \"console-operator-58897d9998-28rcg\" (UID: \"2d03a289-7b6f-4264-83cc-fe33b26d5a2c\") " 
pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.900014 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a6697a04-3efc-4511-bd87-42630448efe0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wt247\" (UID: \"a6697a04-3efc-4511-bd87-42630448efe0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.900038 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6697a04-3efc-4511-bd87-42630448efe0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wt247\" (UID: \"a6697a04-3efc-4511-bd87-42630448efe0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.900080 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg8j4\" (UniqueName: \"kubernetes.io/projected/e7893fbf-4354-4fc6-bc2a-fdadbdee1311-kube-api-access-lg8j4\") pod \"cluster-samples-operator-665b6dd947-zbcnq\" (UID: \"e7893fbf-4354-4fc6-bc2a-fdadbdee1311\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zbcnq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.900105 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qm6md\" (UID: \"7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.900127 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.900178 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.900208 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2377e2e1-2de4-46e9-a0a9-768f5dd52317-available-featuregates\") pod \"openshift-config-operator-7777fb866f-prkth\" (UID: \"2377e2e1-2de4-46e9-a0a9-768f5dd52317\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.900254 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5826956e-b8ea-4053-8987-ebe9f350c975-audit-dir\") 
pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.900277 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx2vz\" (UniqueName: \"kubernetes.io/projected/ae46fc1b-666a-498f-92f4-673cb535530e-kube-api-access-xx2vz\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.900294 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be38a4fa-fb1e-4f8d-9887-b4232ed7c087-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qpz4c\" (UID: \"be38a4fa-fb1e-4f8d-9887-b4232ed7c087\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.900434 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.900439 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5826956e-b8ea-4053-8987-ebe9f350c975-encryption-config\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.899925 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dbccfd36-acd0-4302-befd-f032ebddb856-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.900716 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2377e2e1-2de4-46e9-a0a9-768f5dd52317-serving-cert\") pod \"openshift-config-operator-7777fb866f-prkth\" (UID: \"2377e2e1-2de4-46e9-a0a9-768f5dd52317\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.901536 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-config\") pod \"route-controller-manager-6576b87f9c-jt4tv\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.901673 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-trusted-ca-bundle\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.901711 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.902837 4805 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5826956e-b8ea-4053-8987-ebe9f350c975-image-import-ca\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.902983 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9eab42-29fd-4d17-8c0d-57eb48913cb7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6xf\" (UID: \"0b9eab42-29fd-4d17-8c0d-57eb48913cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.903119 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5826956e-b8ea-4053-8987-ebe9f350c975-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.903379 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9djnw\" (UniqueName: \"kubernetes.io/projected/184529ec-0250-4ed3-a1ef-4a5606202a85-kube-api-access-9djnw\") pod \"downloads-7954f5f757-s9j77\" (UID: \"184529ec-0250-4ed3-a1ef-4a5606202a85\") " pod="openshift-console/downloads-7954f5f757-s9j77" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.903419 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t7bj\" (UniqueName: \"kubernetes.io/projected/7f262dc5-9bae-450c-ab81-3172ba82e700-kube-api-access-7t7bj\") pod \"controller-manager-879f6c89f-nq4vq\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.903484 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.903516 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2c76\" (UniqueName: \"kubernetes.io/projected/9a656193-7884-4e3d-8a17-4ff680c4a116-kube-api-access-q2c76\") pod \"machine-api-operator-5694c8668f-q7lmj\" (UID: \"9a656193-7884-4e3d-8a17-4ff680c4a116\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.903538 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5826956e-b8ea-4053-8987-ebe9f350c975-node-pullsecrets\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.903571 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dbccfd36-acd0-4302-befd-f032ebddb856-audit-policies\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: 
\"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.903592 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a0ad740-271d-452e-b8be-d5e190a46719-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lzhxz\" (UID: \"5a0ad740-271d-452e-b8be-d5e190a46719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.903614 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5826956e-b8ea-4053-8987-ebe9f350c975-config\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.903637 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-serving-cert\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.903659 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-config\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.903673 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5826956e-b8ea-4053-8987-ebe9f350c975-audit\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.903690 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6697a04-3efc-4511-bd87-42630448efe0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wt247\" (UID: \"a6697a04-3efc-4511-bd87-42630448efe0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.904517 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5826956e-b8ea-4053-8987-ebe9f350c975-node-pullsecrets\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.904632 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5826956e-b8ea-4053-8987-ebe9f350c975-etcd-serving-ca\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.904969 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/dbccfd36-acd0-4302-befd-f032ebddb856-audit-policies\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.905001 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9a656193-7884-4e3d-8a17-4ff680c4a116-images\") pod \"machine-api-operator-5694c8668f-q7lmj\" (UID: \"9a656193-7884-4e3d-8a17-4ff680c4a116\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.895381 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbccfd36-acd0-4302-befd-f032ebddb856-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.905824 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f262dc5-9bae-450c-ab81-3172ba82e700-serving-cert\") pod \"controller-manager-879f6c89f-nq4vq\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.906022 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5826956e-b8ea-4053-8987-ebe9f350c975-config\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.906524 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-oauth-serving-cert\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.906743 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d03a289-7b6f-4264-83cc-fe33b26d5a2c-config\") pod \"console-operator-58897d9998-28rcg\" (UID: \"2d03a289-7b6f-4264-83cc-fe33b26d5a2c\") " pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.906797 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d03a289-7b6f-4264-83cc-fe33b26d5a2c-trusted-ca\") pod \"console-operator-58897d9998-28rcg\" (UID: \"2d03a289-7b6f-4264-83cc-fe33b26d5a2c\") " pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.906875 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-tkhnx"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.907364 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9k8h7"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.907701 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nq4vq\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.907717 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.907986 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a0ad740-271d-452e-b8be-d5e190a46719-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lzhxz\" (UID: \"5a0ad740-271d-452e-b8be-d5e190a46719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.908058 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lzhxz"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.908073 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.908167 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-client-ca\") pod \"controller-manager-879f6c89f-nq4vq\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.908367 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dbccfd36-acd0-4302-befd-f032ebddb856-encryption-config\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.908647 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.908905 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5826956e-b8ea-4053-8987-ebe9f350c975-audit\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.909189 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50e153c5-173b-4a28-a028-5ebf2ff22054-config\") pod \"machine-approver-56656f9798-gz75j\" (UID: \"50e153c5-173b-4a28-a028-5ebf2ff22054\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.909590 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-config\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.909680 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-serving-cert\") pod \"route-controller-manager-6576b87f9c-jt4tv\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.909713 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be38a4fa-fb1e-4f8d-9887-b4232ed7c087-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qpz4c\" (UID: \"be38a4fa-fb1e-4f8d-9887-b4232ed7c087\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.909735 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.910538 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6ndp4"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.910564 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-serving-cert\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.910732 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5826956e-b8ea-4053-8987-ebe9f350c975-encryption-config\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.911757 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrpjr\" (UniqueName: \"kubernetes.io/projected/2d03a289-7b6f-4264-83cc-fe33b26d5a2c-kube-api-access-hrpjr\") pod \"console-operator-58897d9998-28rcg\" (UID: \"2d03a289-7b6f-4264-83cc-fe33b26d5a2c\") " pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.911781 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae46fc1b-666a-498f-92f4-673cb535530e-serving-cert\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.911799 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcwwz\" (UniqueName: \"kubernetes.io/projected/832ba633-f44c-4aa1-8791-65656ed2a744-kube-api-access-bcwwz\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.911835 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a656193-7884-4e3d-8a17-4ff680c4a116-config\") pod \"machine-api-operator-5694c8668f-q7lmj\" (UID: \"9a656193-7884-4e3d-8a17-4ff680c4a116\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.912557 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a656193-7884-4e3d-8a17-4ff680c4a116-config\") pod \"machine-api-operator-5694c8668f-q7lmj\" (UID: \"9a656193-7884-4e3d-8a17-4ff680c4a116\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.913315 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5826956e-b8ea-4053-8987-ebe9f350c975-audit-dir\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.913649 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.914241 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-oauth-config\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.914301 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-r2dsr"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.914358 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6ndp4" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.914383 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gsh2j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.914398 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9k8h7" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.914416 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tkhnx" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.914436 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.914458 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.915335 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a0ad740-271d-452e-b8be-d5e190a46719-serving-cert\") pod \"authentication-operator-69f744f599-lzhxz\" (UID: \"5a0ad740-271d-452e-b8be-d5e190a46719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.915446 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/50e153c5-173b-4a28-a028-5ebf2ff22054-machine-approver-tls\") pod \"machine-approver-56656f9798-gz75j\" (UID: \"50e153c5-173b-4a28-a028-5ebf2ff22054\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.915682 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9eab42-29fd-4d17-8c0d-57eb48913cb7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6xf\" (UID: \"0b9eab42-29fd-4d17-8c0d-57eb48913cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.915969 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-serving-cert\") pod \"route-controller-manager-6576b87f9c-jt4tv\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.916248 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.916688 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dbccfd36-acd0-4302-befd-f032ebddb856-etcd-client\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.916726 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q7lmj"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.917023 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbccfd36-acd0-4302-befd-f032ebddb856-serving-cert\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.917361 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5826956e-b8ea-4053-8987-ebe9f350c975-serving-cert\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.918250 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mwpbn"] Dec 16 11:56:55 crc 
kubenswrapper[4805]: I1216 11:56:55.919376 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-2lp7s"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.920187 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2lp7s" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.920563 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-z69nw"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.921340 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-z69nw" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.922654 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sb7p7"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.923670 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-28rcg"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.927516 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d03a289-7b6f-4264-83cc-fe33b26d5a2c-serving-cert\") pod \"console-operator-58897d9998-28rcg\" (UID: \"2d03a289-7b6f-4264-83cc-fe33b26d5a2c\") " pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.934546 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5826956e-b8ea-4053-8987-ebe9f350c975-etcd-client\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.948869 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.956020 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.957235 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.957845 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.965639 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.968029 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kkf49"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.969309 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.976694 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 
11:56:55.976885 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.977344 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.978509 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.979961 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sjff7"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.981392 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6ndp4"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.984280 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.985609 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zbcnq"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.986431 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.988126 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.992059 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z69nw"] Dec 16 11:56:55 crc kubenswrapper[4805]: I1216 11:56:55.996447 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012443 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/424e23bf-3841-48b3-80c7-b1d73d8110c7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-smtmw\" (UID: \"424e23bf-3841-48b3-80c7-b1d73d8110c7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012485 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmcvz\" (UniqueName: \"kubernetes.io/projected/a176db58-5c9e-4446-bccf-01eabe6612d2-kube-api-access-gmcvz\") pod \"kube-storage-version-migrator-operator-b67b599dd-2c94r\" (UID: \"a176db58-5c9e-4446-bccf-01eabe6612d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012507 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/424e23bf-3841-48b3-80c7-b1d73d8110c7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-smtmw\" (UID: \"424e23bf-3841-48b3-80c7-b1d73d8110c7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" Dec 16 11:56:56 crc 
kubenswrapper[4805]: I1216 11:56:56.012524 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/424e23bf-3841-48b3-80c7-b1d73d8110c7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-smtmw\" (UID: \"424e23bf-3841-48b3-80c7-b1d73d8110c7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012543 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfvr2\" (UniqueName: \"kubernetes.io/projected/eeb94cb2-1f1c-4547-93ab-895959638a88-kube-api-access-tfvr2\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012563 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6697a04-3efc-4511-bd87-42630448efe0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wt247\" (UID: \"a6697a04-3efc-4511-bd87-42630448efe0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012579 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/eeb94cb2-1f1c-4547-93ab-895959638a88-csi-data-dir\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012597 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a700da85-9bcf-4223-aaed-e9baa489beab-trusted-ca\") pod \"ingress-operator-5b745b69d9-fnx4n\" (UID: \"a700da85-9bcf-4223-aaed-e9baa489beab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012621 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae46fc1b-666a-498f-92f4-673cb535530e-serving-cert\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012637 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcwwz\" (UniqueName: \"kubernetes.io/projected/832ba633-f44c-4aa1-8791-65656ed2a744-kube-api-access-bcwwz\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012654 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aaf86ef4-c14b-41b6-bd97-92a701b82562-proxy-tls\") pod \"machine-config-operator-74547568cd-k5chc\" (UID: \"aaf86ef4-c14b-41b6-bd97-92a701b82562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012678 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d60eaf3-b804-48a1-bc67-710f8ce5805f-metrics-certs\") pod \"router-default-5444994796-j4rx5\" (UID: \"1d60eaf3-b804-48a1-bc67-710f8ce5805f\") " pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012707 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7nbw\" (UniqueName: \"kubernetes.io/projected/1d60eaf3-b804-48a1-bc67-710f8ce5805f-kube-api-access-k7nbw\") pod \"router-default-5444994796-j4rx5\" (UID: \"1d60eaf3-b804-48a1-bc67-710f8ce5805f\") " pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012740 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a176db58-5c9e-4446-bccf-01eabe6612d2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2c94r\" (UID: \"a176db58-5c9e-4446-bccf-01eabe6612d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012757 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/df2558d5-6ce0-4fb0-b689-fc8682a89744-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9jccm\" (UID: \"df2558d5-6ce0-4fb0-b689-fc8682a89744\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jccm" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012784 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcc6r\" (UniqueName: \"kubernetes.io/projected/be38a4fa-fb1e-4f8d-9887-b4232ed7c087-kube-api-access-qcc6r\") pod \"openshift-controller-manager-operator-756b6f6bc6-qpz4c\" (UID: \"be38a4fa-fb1e-4f8d-9887-b4232ed7c087\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012806 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012822 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a700da85-9bcf-4223-aaed-e9baa489beab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fnx4n\" (UID: \"a700da85-9bcf-4223-aaed-e9baa489beab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012839 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba-config\") pod \"kube-controller-manager-operator-78b949d7b-qm6md\" (UID: \"7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md" Dec 16 11:56:56 crc 
kubenswrapper[4805]: I1216 11:56:56.012861 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eeb94cb2-1f1c-4547-93ab-895959638a88-registration-dir\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012878 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qm6md\" (UID: \"7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012895 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/eeb94cb2-1f1c-4547-93ab-895959638a88-mountpoint-dir\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012923 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55e84bc6-30ab-46a1-8535-e9fa83eaaa5b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kkf49\" (UID: \"55e84bc6-30ab-46a1-8535-e9fa83eaaa5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kkf49" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012941 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012956 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/eeb94cb2-1f1c-4547-93ab-895959638a88-plugins-dir\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.012977 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06647187-e41a-4641-9fab-7604122b8f0a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9lfcx\" (UID: \"06647187-e41a-4641-9fab-7604122b8f0a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.013000 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eeb94cb2-1f1c-4547-93ab-895959638a88-socket-dir\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.013029 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a6697a04-3efc-4511-bd87-42630448efe0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wt247\" (UID: \"a6697a04-3efc-4511-bd87-42630448efe0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.015170 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qm6md\" (UID: \"7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.013217 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a6697a04-3efc-4511-bd87-42630448efe0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wt247\" (UID: \"a6697a04-3efc-4511-bd87-42630448efe0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.014609 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6697a04-3efc-4511-bd87-42630448efe0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wt247\" (UID: \"a6697a04-3efc-4511-bd87-42630448efe0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.015608 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9djnw\" (UniqueName: \"kubernetes.io/projected/184529ec-0250-4ed3-a1ef-4a5606202a85-kube-api-access-9djnw\") pod \"downloads-7954f5f757-s9j77\" (UID: \"184529ec-0250-4ed3-a1ef-4a5606202a85\") " pod="openshift-console/downloads-7954f5f757-s9j77" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.015649 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.015691 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1d60eaf3-b804-48a1-bc67-710f8ce5805f-default-certificate\") pod \"router-default-5444994796-j4rx5\" (UID: \"1d60eaf3-b804-48a1-bc67-710f8ce5805f\") " pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.015713 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aaf86ef4-c14b-41b6-bd97-92a701b82562-images\") pod \"machine-config-operator-74547568cd-k5chc\" (UID: \"aaf86ef4-c14b-41b6-bd97-92a701b82562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.015736 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/be38a4fa-fb1e-4f8d-9887-b4232ed7c087-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qpz4c\" (UID: \"be38a4fa-fb1e-4f8d-9887-b4232ed7c087\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.015758 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.015791 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06647187-e41a-4641-9fab-7604122b8f0a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9lfcx\" (UID: \"06647187-e41a-4641-9fab-7604122b8f0a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.015826 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae46fc1b-666a-498f-92f4-673cb535530e-etcd-service-ca\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.015857 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf2vf\" (UniqueName: \"kubernetes.io/projected/a700da85-9bcf-4223-aaed-e9baa489beab-kube-api-access-hf2vf\") pod \"ingress-operator-5b745b69d9-fnx4n\" (UID: \"a700da85-9bcf-4223-aaed-e9baa489beab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.015880 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a6697a04-3efc-4511-bd87-42630448efe0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wt247\" (UID: \"a6697a04-3efc-4511-bd87-42630448efe0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.015904 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7893fbf-4354-4fc6-bc2a-fdadbdee1311-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zbcnq\" (UID: \"e7893fbf-4354-4fc6-bc2a-fdadbdee1311\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zbcnq" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.015931 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae46fc1b-666a-498f-92f4-673cb535530e-etcd-client\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.015970 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.015997 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4459\" (UniqueName: \"kubernetes.io/projected/aaf86ef4-c14b-41b6-bd97-92a701b82562-kube-api-access-r4459\") pod \"machine-config-operator-74547568cd-k5chc\" (UID: \"aaf86ef4-c14b-41b6-bd97-92a701b82562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016022 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1d60eaf3-b804-48a1-bc67-710f8ce5805f-stats-auth\") pod \"router-default-5444994796-j4rx5\" (UID: \"1d60eaf3-b804-48a1-bc67-710f8ce5805f\") " pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016056 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/832ba633-f44c-4aa1-8791-65656ed2a744-audit-dir\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016081 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aaf86ef4-c14b-41b6-bd97-92a701b82562-auth-proxy-config\") pod \"machine-config-operator-74547568cd-k5chc\" (UID: \"aaf86ef4-c14b-41b6-bd97-92a701b82562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016102 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a176db58-5c9e-4446-bccf-01eabe6612d2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2c94r\" (UID: \"a176db58-5c9e-4446-bccf-01eabe6612d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016128 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d60eaf3-b804-48a1-bc67-710f8ce5805f-service-ca-bundle\") pod \"router-default-5444994796-j4rx5\" (UID: \"1d60eaf3-b804-48a1-bc67-710f8ce5805f\") " pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016197 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ae46fc1b-666a-498f-92f4-673cb535530e-etcd-ca\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016222 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a6697a04-3efc-4511-bd87-42630448efe0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wt247\" (UID: \"a6697a04-3efc-4511-bd87-42630448efe0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016248 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a700da85-9bcf-4223-aaed-e9baa489beab-metrics-tls\") pod \"ingress-operator-5b745b69d9-fnx4n\" (UID: \"a700da85-9bcf-4223-aaed-e9baa489beab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016308 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae46fc1b-666a-498f-92f4-673cb535530e-config\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016332 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-audit-policies\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016371 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016397 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016433 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016457 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94pw5\" (UniqueName: \"kubernetes.io/projected/424e23bf-3841-48b3-80c7-b1d73d8110c7-kube-api-access-94pw5\") pod \"cluster-image-registry-operator-dc59b4c8b-smtmw\" (UID: \"424e23bf-3841-48b3-80c7-b1d73d8110c7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2db5\" (UniqueName: 
\"kubernetes.io/projected/55e84bc6-30ab-46a1-8535-e9fa83eaaa5b-kube-api-access-r2db5\") pod \"multus-admission-controller-857f4d67dd-kkf49\" (UID: \"55e84bc6-30ab-46a1-8535-e9fa83eaaa5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kkf49" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016531 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016553 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06647187-e41a-4641-9fab-7604122b8f0a-config\") pod \"kube-apiserver-operator-766d6c64bb-9lfcx\" (UID: \"06647187-e41a-4641-9fab-7604122b8f0a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016581 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x62j\" (UniqueName: \"kubernetes.io/projected/df2558d5-6ce0-4fb0-b689-fc8682a89744-kube-api-access-2x62j\") pod \"control-plane-machine-set-operator-78cbb6b69f-9jccm\" (UID: \"df2558d5-6ce0-4fb0-b689-fc8682a89744\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jccm" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016604 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016630 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6697a04-3efc-4511-bd87-42630448efe0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wt247\" (UID: \"a6697a04-3efc-4511-bd87-42630448efe0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016658 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg8j4\" (UniqueName: \"kubernetes.io/projected/e7893fbf-4354-4fc6-bc2a-fdadbdee1311-kube-api-access-lg8j4\") pod \"cluster-samples-operator-665b6dd947-zbcnq\" (UID: \"e7893fbf-4354-4fc6-bc2a-fdadbdee1311\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zbcnq" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016681 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016694 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/ae46fc1b-666a-498f-92f4-673cb535530e-etcd-service-ca\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016704 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx2vz\" (UniqueName: \"kubernetes.io/projected/ae46fc1b-666a-498f-92f4-673cb535530e-kube-api-access-xx2vz\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.016786 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a6697a04-3efc-4511-bd87-42630448efe0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wt247\" (UID: \"a6697a04-3efc-4511-bd87-42630448efe0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.017362 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.018613 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.019028 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ae46fc1b-666a-498f-92f4-673cb535530e-etcd-ca\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.019792 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae46fc1b-666a-498f-92f4-673cb535530e-config\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.020200 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-audit-policies\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.023919 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn"] Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.027809 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.029119 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.029207 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be38a4fa-fb1e-4f8d-9887-b4232ed7c087-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qpz4c\" (UID: \"be38a4fa-fb1e-4f8d-9887-b4232ed7c087\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.029262 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be38a4fa-fb1e-4f8d-9887-b4232ed7c087-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qpz4c\" (UID: \"be38a4fa-fb1e-4f8d-9887-b4232ed7c087\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.029442 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/832ba633-f44c-4aa1-8791-65656ed2a744-audit-dir\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.041438 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.041633 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7893fbf-4354-4fc6-bc2a-fdadbdee1311-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zbcnq\" (UID: \"e7893fbf-4354-4fc6-bc2a-fdadbdee1311\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zbcnq" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.042073 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.042207 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be38a4fa-fb1e-4f8d-9887-b4232ed7c087-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-qpz4c\" (UID: \"be38a4fa-fb1e-4f8d-9887-b4232ed7c087\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.042412 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae46fc1b-666a-498f-92f4-673cb535530e-etcd-client\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.042541 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba-config\") pod \"kube-controller-manager-operator-78b949d7b-qm6md\" (UID: \"7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.043127 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.043192 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.043540 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.043776 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.043978 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.045489 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae46fc1b-666a-498f-92f4-673cb535530e-serving-cert\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.045933 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.046422 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6697a04-3efc-4511-bd87-42630448efe0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wt247\" (UID: \"a6697a04-3efc-4511-bd87-42630448efe0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.052960 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qm6md\" (UID: \"7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.053046 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gsh2j"] Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.054889 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.057425 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.061253 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2lp7s"] Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.062314 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m"] Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.064077 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n"] Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.066159 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jccm"] Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.069112 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9k8h7"] Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.070441 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9"] Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.072595 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x9qqr"] Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.075600 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.097422 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.115356 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.129941 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x62j\" (UniqueName: \"kubernetes.io/projected/df2558d5-6ce0-4fb0-b689-fc8682a89744-kube-api-access-2x62j\") pod \"control-plane-machine-set-operator-78cbb6b69f-9jccm\" (UID: \"df2558d5-6ce0-4fb0-b689-fc8682a89744\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jccm" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130011 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/424e23bf-3841-48b3-80c7-b1d73d8110c7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-smtmw\" (UID: \"424e23bf-3841-48b3-80c7-b1d73d8110c7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130038 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmcvz\" (UniqueName: \"kubernetes.io/projected/a176db58-5c9e-4446-bccf-01eabe6612d2-kube-api-access-gmcvz\") pod \"kube-storage-version-migrator-operator-b67b599dd-2c94r\" (UID: \"a176db58-5c9e-4446-bccf-01eabe6612d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130065 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/424e23bf-3841-48b3-80c7-b1d73d8110c7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-smtmw\" (UID: \"424e23bf-3841-48b3-80c7-b1d73d8110c7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130089 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/424e23bf-3841-48b3-80c7-b1d73d8110c7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-smtmw\" (UID: \"424e23bf-3841-48b3-80c7-b1d73d8110c7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130116 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfvr2\" (UniqueName: \"kubernetes.io/projected/eeb94cb2-1f1c-4547-93ab-895959638a88-kube-api-access-tfvr2\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130160 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/eeb94cb2-1f1c-4547-93ab-895959638a88-csi-data-dir\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130183 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a700da85-9bcf-4223-aaed-e9baa489beab-trusted-ca\") pod \"ingress-operator-5b745b69d9-fnx4n\" (UID: \"a700da85-9bcf-4223-aaed-e9baa489beab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" 
Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130232 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aaf86ef4-c14b-41b6-bd97-92a701b82562-proxy-tls\") pod \"machine-config-operator-74547568cd-k5chc\" (UID: \"aaf86ef4-c14b-41b6-bd97-92a701b82562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130269 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d60eaf3-b804-48a1-bc67-710f8ce5805f-metrics-certs\") pod \"router-default-5444994796-j4rx5\" (UID: \"1d60eaf3-b804-48a1-bc67-710f8ce5805f\") " pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130292 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7nbw\" (UniqueName: \"kubernetes.io/projected/1d60eaf3-b804-48a1-bc67-710f8ce5805f-kube-api-access-k7nbw\") pod \"router-default-5444994796-j4rx5\" (UID: \"1d60eaf3-b804-48a1-bc67-710f8ce5805f\") " pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130323 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a176db58-5c9e-4446-bccf-01eabe6612d2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2c94r\" (UID: \"a176db58-5c9e-4446-bccf-01eabe6612d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130348 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/df2558d5-6ce0-4fb0-b689-fc8682a89744-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9jccm\" (UID: \"df2558d5-6ce0-4fb0-b689-fc8682a89744\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jccm" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130393 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a700da85-9bcf-4223-aaed-e9baa489beab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fnx4n\" (UID: \"a700da85-9bcf-4223-aaed-e9baa489beab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130432 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eeb94cb2-1f1c-4547-93ab-895959638a88-registration-dir\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130455 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/eeb94cb2-1f1c-4547-93ab-895959638a88-mountpoint-dir\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130496 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55e84bc6-30ab-46a1-8535-e9fa83eaaa5b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kkf49\" (UID: \"55e84bc6-30ab-46a1-8535-e9fa83eaaa5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kkf49" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130522 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/eeb94cb2-1f1c-4547-93ab-895959638a88-plugins-dir\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130547 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06647187-e41a-4641-9fab-7604122b8f0a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9lfcx\" (UID: \"06647187-e41a-4641-9fab-7604122b8f0a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130563 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/eeb94cb2-1f1c-4547-93ab-895959638a88-csi-data-dir\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130586 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eeb94cb2-1f1c-4547-93ab-895959638a88-socket-dir\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130693 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1d60eaf3-b804-48a1-bc67-710f8ce5805f-default-certificate\") pod \"router-default-5444994796-j4rx5\" (UID: \"1d60eaf3-b804-48a1-bc67-710f8ce5805f\") " pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130715 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aaf86ef4-c14b-41b6-bd97-92a701b82562-images\") pod \"machine-config-operator-74547568cd-k5chc\" (UID: \"aaf86ef4-c14b-41b6-bd97-92a701b82562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130724 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/eeb94cb2-1f1c-4547-93ab-895959638a88-mountpoint-dir\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130765 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06647187-e41a-4641-9fab-7604122b8f0a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9lfcx\" (UID: \"06647187-e41a-4641-9fab-7604122b8f0a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx" Dec 16 11:56:56 crc kubenswrapper[4805]: 
I1216 11:56:56.130796 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf2vf\" (UniqueName: \"kubernetes.io/projected/a700da85-9bcf-4223-aaed-e9baa489beab-kube-api-access-hf2vf\") pod \"ingress-operator-5b745b69d9-fnx4n\" (UID: \"a700da85-9bcf-4223-aaed-e9baa489beab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130824 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eeb94cb2-1f1c-4547-93ab-895959638a88-registration-dir\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130838 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eeb94cb2-1f1c-4547-93ab-895959638a88-socket-dir\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130857 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4459\" (UniqueName: \"kubernetes.io/projected/aaf86ef4-c14b-41b6-bd97-92a701b82562-kube-api-access-r4459\") pod \"machine-config-operator-74547568cd-k5chc\" (UID: \"aaf86ef4-c14b-41b6-bd97-92a701b82562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130875 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1d60eaf3-b804-48a1-bc67-710f8ce5805f-stats-auth\") pod \"router-default-5444994796-j4rx5\" (UID: \"1d60eaf3-b804-48a1-bc67-710f8ce5805f\") " pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130884 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/eeb94cb2-1f1c-4547-93ab-895959638a88-plugins-dir\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130894 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aaf86ef4-c14b-41b6-bd97-92a701b82562-auth-proxy-config\") pod \"machine-config-operator-74547568cd-k5chc\" (UID: \"aaf86ef4-c14b-41b6-bd97-92a701b82562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130929 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a176db58-5c9e-4446-bccf-01eabe6612d2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2c94r\" (UID: \"a176db58-5c9e-4446-bccf-01eabe6612d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.130945 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d60eaf3-b804-48a1-bc67-710f8ce5805f-service-ca-bundle\") 
pod \"router-default-5444994796-j4rx5\" (UID: \"1d60eaf3-b804-48a1-bc67-710f8ce5805f\") " pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.131004 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a700da85-9bcf-4223-aaed-e9baa489beab-metrics-tls\") pod \"ingress-operator-5b745b69d9-fnx4n\" (UID: \"a700da85-9bcf-4223-aaed-e9baa489beab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.131038 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94pw5\" (UniqueName: \"kubernetes.io/projected/424e23bf-3841-48b3-80c7-b1d73d8110c7-kube-api-access-94pw5\") pod \"cluster-image-registry-operator-dc59b4c8b-smtmw\" (UID: \"424e23bf-3841-48b3-80c7-b1d73d8110c7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.131054 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2db5\" (UniqueName: \"kubernetes.io/projected/55e84bc6-30ab-46a1-8535-e9fa83eaaa5b-kube-api-access-r2db5\") pod \"multus-admission-controller-857f4d67dd-kkf49\" (UID: \"55e84bc6-30ab-46a1-8535-e9fa83eaaa5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kkf49" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.131089 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06647187-e41a-4641-9fab-7604122b8f0a-config\") pod \"kube-apiserver-operator-766d6c64bb-9lfcx\" (UID: \"06647187-e41a-4641-9fab-7604122b8f0a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.131365 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/424e23bf-3841-48b3-80c7-b1d73d8110c7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-smtmw\" (UID: \"424e23bf-3841-48b3-80c7-b1d73d8110c7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.131833 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aaf86ef4-c14b-41b6-bd97-92a701b82562-auth-proxy-config\") pod \"machine-config-operator-74547568cd-k5chc\" (UID: \"aaf86ef4-c14b-41b6-bd97-92a701b82562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.131873 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06647187-e41a-4641-9fab-7604122b8f0a-config\") pod \"kube-apiserver-operator-766d6c64bb-9lfcx\" (UID: \"06647187-e41a-4641-9fab-7604122b8f0a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.134275 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06647187-e41a-4641-9fab-7604122b8f0a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9lfcx\" (UID: \"06647187-e41a-4641-9fab-7604122b8f0a\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.136847 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.156395 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.163183 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/424e23bf-3841-48b3-80c7-b1d73d8110c7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-smtmw\" (UID: \"424e23bf-3841-48b3-80c7-b1d73d8110c7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.175405 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.215415 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.235761 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.255979 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.277671 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.296134 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.311924 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1d60eaf3-b804-48a1-bc67-710f8ce5805f-stats-auth\") pod \"router-default-5444994796-j4rx5\" (UID: \"1d60eaf3-b804-48a1-bc67-710f8ce5805f\") " pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.316481 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.326780 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d60eaf3-b804-48a1-bc67-710f8ce5805f-metrics-certs\") pod \"router-default-5444994796-j4rx5\" (UID: \"1d60eaf3-b804-48a1-bc67-710f8ce5805f\") " pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.336574 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.355499 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.363959 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/1d60eaf3-b804-48a1-bc67-710f8ce5805f-default-certificate\") pod \"router-default-5444994796-j4rx5\" (UID: \"1d60eaf3-b804-48a1-bc67-710f8ce5805f\") " pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.376203 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.382199 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d60eaf3-b804-48a1-bc67-710f8ce5805f-service-ca-bundle\") pod \"router-default-5444994796-j4rx5\" (UID: \"1d60eaf3-b804-48a1-bc67-710f8ce5805f\") " pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.396578 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.416976 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.437023 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.445059 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a176db58-5c9e-4446-bccf-01eabe6612d2-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2c94r\" (UID: \"a176db58-5c9e-4446-bccf-01eabe6612d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.456158 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.476407 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.482503 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a176db58-5c9e-4446-bccf-01eabe6612d2-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2c94r\" (UID: \"a176db58-5c9e-4446-bccf-01eabe6612d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.496382 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.516080 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.522258 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.522732 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.524152 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/df2558d5-6ce0-4fb0-b689-fc8682a89744-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9jccm\" (UID: \"df2558d5-6ce0-4fb0-b689-fc8682a89744\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jccm" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.536367 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.556078 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.576394 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.596663 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.616161 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.622647 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aaf86ef4-c14b-41b6-bd97-92a701b82562-images\") pod \"machine-config-operator-74547568cd-k5chc\" (UID: \"aaf86ef4-c14b-41b6-bd97-92a701b82562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.635702 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.656179 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.665363 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aaf86ef4-c14b-41b6-bd97-92a701b82562-proxy-tls\") pod \"machine-config-operator-74547568cd-k5chc\" (UID: \"aaf86ef4-c14b-41b6-bd97-92a701b82562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.676666 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.695961 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.716477 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.737664 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.755858 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.776500 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.797095 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.815528 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.834636 4805 request.go:700] Waited for 1.016789964s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&limit=500&resourceVersion=0 Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.841611 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.855850 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.876285 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.895896 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.917310 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.928123 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a700da85-9bcf-4223-aaed-e9baa489beab-metrics-tls\") pod \"ingress-operator-5b745b69d9-fnx4n\" (UID: \"a700da85-9bcf-4223-aaed-e9baa489beab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.948336 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.955013 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a700da85-9bcf-4223-aaed-e9baa489beab-trusted-ca\") pod \"ingress-operator-5b745b69d9-fnx4n\" (UID: \"a700da85-9bcf-4223-aaed-e9baa489beab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.956097 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.975732 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 16 11:56:56 crc kubenswrapper[4805]: I1216 11:56:56.996043 4805 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.031843 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4dds\" (UniqueName: \"kubernetes.io/projected/2377e2e1-2de4-46e9-a0a9-768f5dd52317-kube-api-access-v4dds\") pod \"openshift-config-operator-7777fb866f-prkth\" (UID: \"2377e2e1-2de4-46e9-a0a9-768f5dd52317\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.050911 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brmjb\" (UniqueName: \"kubernetes.io/projected/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-kube-api-access-brmjb\") pod \"route-controller-manager-6576b87f9c-jt4tv\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.070573 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7ndv\" (UniqueName: \"kubernetes.io/projected/5826956e-b8ea-4053-8987-ebe9f350c975-kube-api-access-p7ndv\") pod \"apiserver-76f77b778f-4jnvf\" (UID: \"5826956e-b8ea-4053-8987-ebe9f350c975\") " pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.076990 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.110250 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bdjx\" (UniqueName: \"kubernetes.io/projected/dbccfd36-acd0-4302-befd-f032ebddb856-kube-api-access-6bdjx\") pod \"apiserver-7bbb656c7d-hgj4j\" (UID: \"dbccfd36-acd0-4302-befd-f032ebddb856\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.126494 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:56:57 crc kubenswrapper[4805]: E1216 11:56:57.131160 4805 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Dec 16 11:56:57 crc kubenswrapper[4805]: E1216 11:56:57.131345 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55e84bc6-30ab-46a1-8535-e9fa83eaaa5b-webhook-certs podName:55e84bc6-30ab-46a1-8535-e9fa83eaaa5b nodeName:}" failed. No retries permitted until 2025-12-16 11:56:57.631321189 +0000 UTC m=+91.349578994 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55e84bc6-30ab-46a1-8535-e9fa83eaaa5b-webhook-certs") pod "multus-admission-controller-857f4d67dd-kkf49" (UID: "55e84bc6-30ab-46a1-8535-e9fa83eaaa5b") : failed to sync secret cache: timed out waiting for the condition Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.137241 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hqns\" (UniqueName: \"kubernetes.io/projected/0b9eab42-29fd-4d17-8c0d-57eb48913cb7-kube-api-access-9hqns\") pod \"openshift-apiserver-operator-796bbdcf4f-5v6xf\" (UID: \"0b9eab42-29fd-4d17-8c0d-57eb48913cb7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.148611 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t7bj\" (UniqueName: \"kubernetes.io/projected/7f262dc5-9bae-450c-ab81-3172ba82e700-kube-api-access-7t7bj\") pod \"controller-manager-879f6c89f-nq4vq\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.173356 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2c76\" (UniqueName: \"kubernetes.io/projected/9a656193-7884-4e3d-8a17-4ff680c4a116-kube-api-access-q2c76\") pod \"machine-api-operator-5694c8668f-q7lmj\" (UID: \"9a656193-7884-4e3d-8a17-4ff680c4a116\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.176489 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.195477 4805 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.216402 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.236034 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.242574 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.255834 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.276461 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.277014 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.285221 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.296088 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.296620 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.312025 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.351682 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.351827 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.427384 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrpjr\" (UniqueName: \"kubernetes.io/projected/2d03a289-7b6f-4264-83cc-fe33b26d5a2c-kube-api-access-hrpjr\") pod \"console-operator-58897d9998-28rcg\" (UID: \"2d03a289-7b6f-4264-83cc-fe33b26d5a2c\") " pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.428035 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s28js\" (UniqueName: \"kubernetes.io/projected/50e153c5-173b-4a28-a028-5ebf2ff22054-kube-api-access-s28js\") pod \"machine-approver-56656f9798-gz75j\" (UID: \"50e153c5-173b-4a28-a028-5ebf2ff22054\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.435379 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76cw7\" (UniqueName: \"kubernetes.io/projected/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-kube-api-access-76cw7\") pod \"console-f9d7485db-dxhld\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.479188 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.480676 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.481751 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.497605 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9dlq\" (UniqueName: \"kubernetes.io/projected/5a0ad740-271d-452e-b8be-d5e190a46719-kube-api-access-g9dlq\") pod \"authentication-operator-69f744f599-lzhxz\" (UID: \"5a0ad740-271d-452e-b8be-d5e190a46719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.545340 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.545471 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.550012 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.550459 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.550677 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.556663 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.580123 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.609686 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.661885 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.662089 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.662505 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.673464 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.673612 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.678162 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.684278 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.684389 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55e84bc6-30ab-46a1-8535-e9fa83eaaa5b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kkf49\" (UID: \"55e84bc6-30ab-46a1-8535-e9fa83eaaa5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kkf49" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.691383 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55e84bc6-30ab-46a1-8535-e9fa83eaaa5b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kkf49\" (UID: \"55e84bc6-30ab-46a1-8535-e9fa83eaaa5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kkf49" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.694401 4805 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.696158 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.715812 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.750592 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.781446 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.781598 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.798057 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.843868 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.843814 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.854454 4805 request.go:700] Waited for 1.934010089s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.859829 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.881397 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.928627 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.928896 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.938757 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 16 11:56:57 crc kubenswrapper[4805]: I1216 11:56:57.991744 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcwwz\" (UniqueName: \"kubernetes.io/projected/832ba633-f44c-4aa1-8791-65656ed2a744-kube-api-access-bcwwz\") pod \"oauth-openshift-558db77b4-mwpbn\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.016205 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qm6md\" (UID: 
\"7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.028662 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9djnw\" (UniqueName: \"kubernetes.io/projected/184529ec-0250-4ed3-a1ef-4a5606202a85-kube-api-access-9djnw\") pod \"downloads-7954f5f757-s9j77\" (UID: \"184529ec-0250-4ed3-a1ef-4a5606202a85\") " pod="openshift-console/downloads-7954f5f757-s9j77" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.034293 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6697a04-3efc-4511-bd87-42630448efe0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wt247\" (UID: \"a6697a04-3efc-4511-bd87-42630448efe0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.060186 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcc6r\" (UniqueName: \"kubernetes.io/projected/be38a4fa-fb1e-4f8d-9887-b4232ed7c087-kube-api-access-qcc6r\") pod \"openshift-controller-manager-operator-756b6f6bc6-qpz4c\" (UID: \"be38a4fa-fb1e-4f8d-9887-b4232ed7c087\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.071606 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.132489 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.132511 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.133227 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg8j4\" (UniqueName: \"kubernetes.io/projected/e7893fbf-4354-4fc6-bc2a-fdadbdee1311-kube-api-access-lg8j4\") pod \"cluster-samples-operator-665b6dd947-zbcnq\" (UID: \"e7893fbf-4354-4fc6-bc2a-fdadbdee1311\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zbcnq" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.133830 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-s9j77" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.134131 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zbcnq" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.145967 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j"] Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.146696 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx2vz\" (UniqueName: \"kubernetes.io/projected/ae46fc1b-666a-498f-92f4-673cb535530e-kube-api-access-xx2vz\") pod \"etcd-operator-b45778765-r2dsr\" (UID: \"ae46fc1b-666a-498f-92f4-673cb535530e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.164282 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.164864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" event={"ID":"50e153c5-173b-4a28-a028-5ebf2ff22054","Type":"ContainerStarted","Data":"13a54ecdb310b1c18c451f390246956d4253350f4309387f7730989e2a7bcc8d"} Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.178714 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmcvz\" (UniqueName: \"kubernetes.io/projected/a176db58-5c9e-4446-bccf-01eabe6612d2-kube-api-access-gmcvz\") pod \"kube-storage-version-migrator-operator-b67b599dd-2c94r\" (UID: \"a176db58-5c9e-4446-bccf-01eabe6612d2\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.179057 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfvr2\" (UniqueName: \"kubernetes.io/projected/eeb94cb2-1f1c-4547-93ab-895959638a88-kube-api-access-tfvr2\") pod \"csi-hostpathplugin-x9qqr\" (UID: \"eeb94cb2-1f1c-4547-93ab-895959638a88\") " pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.181621 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-prkth"] Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.183951 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x62j\" (UniqueName: \"kubernetes.io/projected/df2558d5-6ce0-4fb0-b689-fc8682a89744-kube-api-access-2x62j\") pod \"control-plane-machine-set-operator-78cbb6b69f-9jccm\" (UID: \"df2558d5-6ce0-4fb0-b689-fc8682a89744\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jccm" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.191873 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/424e23bf-3841-48b3-80c7-b1d73d8110c7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-smtmw\" (UID: \"424e23bf-3841-48b3-80c7-b1d73d8110c7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.208071 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7nbw\" (UniqueName: \"kubernetes.io/projected/1d60eaf3-b804-48a1-bc67-710f8ce5805f-kube-api-access-k7nbw\") pod \"router-default-5444994796-j4rx5\" (UID: 
\"1d60eaf3-b804-48a1-bc67-710f8ce5805f\") " pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.218154 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r" Dec 16 11:56:58 crc kubenswrapper[4805]: W1216 11:56:58.221117 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbccfd36_acd0_4302_befd_f032ebddb856.slice/crio-83266438c4f4ecc26045136bf9e89a601a2a72892254152be6e51ec7cf4e0d29 WatchSource:0}: Error finding container 83266438c4f4ecc26045136bf9e89a601a2a72892254152be6e51ec7cf4e0d29: Status 404 returned error can't find the container with id 83266438c4f4ecc26045136bf9e89a601a2a72892254152be6e51ec7cf4e0d29 Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.222769 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a700da85-9bcf-4223-aaed-e9baa489beab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fnx4n\" (UID: \"a700da85-9bcf-4223-aaed-e9baa489beab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.227231 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jccm" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.232428 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q7lmj"] Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.241805 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06647187-e41a-4641-9fab-7604122b8f0a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9lfcx\" (UID: \"06647187-e41a-4641-9fab-7604122b8f0a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.269964 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf2vf\" (UniqueName: \"kubernetes.io/projected/a700da85-9bcf-4223-aaed-e9baa489beab-kube-api-access-hf2vf\") pod \"ingress-operator-5b745b69d9-fnx4n\" (UID: \"a700da85-9bcf-4223-aaed-e9baa489beab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.292914 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.300759 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4459\" (UniqueName: \"kubernetes.io/projected/aaf86ef4-c14b-41b6-bd97-92a701b82562-kube-api-access-r4459\") pod \"machine-config-operator-74547568cd-k5chc\" (UID: \"aaf86ef4-c14b-41b6-bd97-92a701b82562\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.333315 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2db5\" (UniqueName: \"kubernetes.io/projected/55e84bc6-30ab-46a1-8535-e9fa83eaaa5b-kube-api-access-r2db5\") pod \"multus-admission-controller-857f4d67dd-kkf49\" (UID: \"55e84bc6-30ab-46a1-8535-e9fa83eaaa5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kkf49" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.343947 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.349523 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.354437 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kkf49" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.362949 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.363812 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94pw5\" (UniqueName: \"kubernetes.io/projected/424e23bf-3841-48b3-80c7-b1d73d8110c7-kube-api-access-94pw5\") pod \"cluster-image-registry-operator-dc59b4c8b-smtmw\" (UID: \"424e23bf-3841-48b3-80c7-b1d73d8110c7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.435121 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.436838 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.440850 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.441094 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.443631 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d782003-59ce-4968-a30c-5f12f3a9bd4f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9d7fz\" (UID: \"1d782003-59ce-4968-a30c-5f12f3a9bd4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.443674 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-config-volume\") pod \"collect-profiles-29431425-kc9qc\" (UID: \"ce9b2f10-5fc2-4784-9c34-5fc3ed544115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.443710 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d782003-59ce-4968-a30c-5f12f3a9bd4f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9d7fz\" (UID: \"1d782003-59ce-4968-a30c-5f12f3a9bd4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.443730 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24e933f0-876c-4bf3-87dd-a2c6f7c69a6e-serving-cert\") pod \"service-ca-operator-777779d784-rbcrd\" (UID: \"24e933f0-876c-4bf3-87dd-a2c6f7c69a6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.443754 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw587\" (UniqueName: \"kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-kube-api-access-jw587\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.443835 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxpqt\" (UniqueName: \"kubernetes.io/projected/25bbef6b-4746-41e9-83ef-20e9c54a7451-kube-api-access-zxpqt\") pod \"marketplace-operator-79b997595-sjff7\" (UID: \"25bbef6b-4746-41e9-83ef-20e9c54a7451\") " pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.443861 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/bc4281fb-7921-4bde-89f0-059e9b9935e2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x58q8\" (UID: \"bc4281fb-7921-4bde-89f0-059e9b9935e2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.443903 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/25907081-958d-4fb5-a8d2-ce8454adedfb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.443925 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc4281fb-7921-4bde-89f0-059e9b9935e2-proxy-tls\") pod \"machine-config-controller-84d6567774-x58q8\" (UID: \"bc4281fb-7921-4bde-89f0-059e9b9935e2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.443949 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25bbef6b-4746-41e9-83ef-20e9c54a7451-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sjff7\" (UID: \"25bbef6b-4746-41e9-83ef-20e9c54a7451\") " pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.443991 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d782003-59ce-4968-a30c-5f12f3a9bd4f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9d7fz\" (UID: \"1d782003-59ce-4968-a30c-5f12f3a9bd4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444046 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5r6t\" (UniqueName: \"kubernetes.io/projected/24e933f0-876c-4bf3-87dd-a2c6f7c69a6e-kube-api-access-z5r6t\") pod \"service-ca-operator-777779d784-rbcrd\" (UID: \"24e933f0-876c-4bf3-87dd-a2c6f7c69a6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444115 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k48z8\" (UniqueName: \"kubernetes.io/projected/bc4281fb-7921-4bde-89f0-059e9b9935e2-kube-api-access-k48z8\") pod \"machine-config-controller-84d6567774-x58q8\" (UID: \"bc4281fb-7921-4bde-89f0-059e9b9935e2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444240 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-registry-tls\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444292 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/25907081-958d-4fb5-a8d2-ce8454adedfb-registry-certificates\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444320 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25bbef6b-4746-41e9-83ef-20e9c54a7451-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sjff7\" (UID: \"25bbef6b-4746-41e9-83ef-20e9c54a7451\") " pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444346 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4k89\" (UniqueName: \"kubernetes.io/projected/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-kube-api-access-s4k89\") pod \"collect-profiles-29431425-kc9qc\" (UID: \"ce9b2f10-5fc2-4784-9c34-5fc3ed544115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444382 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25907081-958d-4fb5-a8d2-ce8454adedfb-trusted-ca\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444415 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/98881921-2262-4a91-b9af-6d1d5207963f-tmpfs\") pod \"packageserver-d55dfcdfc-tzztn\" (UID: \"98881921-2262-4a91-b9af-6d1d5207963f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444461 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-secret-volume\") pod \"collect-profiles-29431425-kc9qc\" (UID: \"ce9b2f10-5fc2-4784-9c34-5fc3ed544115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444526 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-bound-sa-token\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444549 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98881921-2262-4a91-b9af-6d1d5207963f-apiservice-cert\") pod \"packageserver-d55dfcdfc-tzztn\" (UID: \"98881921-2262-4a91-b9af-6d1d5207963f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444579 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/25907081-958d-4fb5-a8d2-ce8454adedfb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444597 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24e933f0-876c-4bf3-87dd-a2c6f7c69a6e-config\") pod \"service-ca-operator-777779d784-rbcrd\" (UID: \"24e933f0-876c-4bf3-87dd-a2c6f7c69a6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444661 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfv47\" (UniqueName: \"kubernetes.io/projected/98881921-2262-4a91-b9af-6d1d5207963f-kube-api-access-rfv47\") pod \"packageserver-d55dfcdfc-tzztn\" (UID: \"98881921-2262-4a91-b9af-6d1d5207963f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.444736 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98881921-2262-4a91-b9af-6d1d5207963f-webhook-cert\") pod \"packageserver-d55dfcdfc-tzztn\" (UID: \"98881921-2262-4a91-b9af-6d1d5207963f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" Dec 16 11:56:58 crc kubenswrapper[4805]: E1216 11:56:58.445392 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:56:58.945376549 +0000 UTC m=+92.663634414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.462790 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.462923 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.477604 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4jnvf"] Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.484423 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.507593 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.547817 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:56:58 crc kubenswrapper[4805]: E1216 11:56:58.547912 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:56:59.047893414 +0000 UTC m=+92.766151219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.549580 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfv47\" (UniqueName: \"kubernetes.io/projected/98881921-2262-4a91-b9af-6d1d5207963f-kube-api-access-rfv47\") pod \"packageserver-d55dfcdfc-tzztn\" (UID: \"98881921-2262-4a91-b9af-6d1d5207963f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.549642 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7306c7ab-58c7-4034-a406-ec666f9a70b9-config-volume\") pod \"dns-default-z69nw\" (UID: \"7306c7ab-58c7-4034-a406-ec666f9a70b9\") " pod="openshift-dns/dns-default-z69nw" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.550029 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.550825 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98881921-2262-4a91-b9af-6d1d5207963f-webhook-cert\") pod \"packageserver-d55dfcdfc-tzztn\" (UID: \"98881921-2262-4a91-b9af-6d1d5207963f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.551786 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d7f81a73-07df-4bbe-ac9b-33897678a7ec-cert\") pod \"ingress-canary-2lp7s\" (UID: \"d7f81a73-07df-4bbe-ac9b-33897678a7ec\") " pod="openshift-ingress-canary/ingress-canary-2lp7s" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.551828 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ee60544b-9f19-48e5-84de-666fb4ca9110-node-bootstrap-token\") pod \"machine-config-server-tkhnx\" (UID: \"ee60544b-9f19-48e5-84de-666fb4ca9110\") " pod="openshift-machine-config-operator/machine-config-server-tkhnx" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.552009 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zktn8\" (UniqueName: \"kubernetes.io/projected/7306c7ab-58c7-4034-a406-ec666f9a70b9-kube-api-access-zktn8\") pod \"dns-default-z69nw\" (UID: \"7306c7ab-58c7-4034-a406-ec666f9a70b9\") " pod="openshift-dns/dns-default-z69nw" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.552563 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b1679f25-d297-4683-b2e6-3657feb872eb-signing-key\") pod \"service-ca-9c57cc56f-9k8h7\" (UID: \"b1679f25-d297-4683-b2e6-3657feb872eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-9k8h7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.554304 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhzxj\" (UniqueName: \"kubernetes.io/projected/d7f81a73-07df-4bbe-ac9b-33897678a7ec-kube-api-access-jhzxj\") pod \"ingress-canary-2lp7s\" (UID: \"d7f81a73-07df-4bbe-ac9b-33897678a7ec\") " pod="openshift-ingress-canary/ingress-canary-2lp7s" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.554612 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9e7c545a-021d-4b0a-a781-fd4f68258fa3-metrics-tls\") pod \"dns-operator-744455d44c-6ndp4\" (UID: \"9e7c545a-021d-4b0a-a781-fd4f68258fa3\") " pod="openshift-dns-operator/dns-operator-744455d44c-6ndp4" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.554692 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d782003-59ce-4968-a30c-5f12f3a9bd4f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9d7fz\" (UID: \"1d782003-59ce-4968-a30c-5f12f3a9bd4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.554752 4805 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-config-volume\") pod \"collect-profiles-29431425-kc9qc\" (UID: \"ce9b2f10-5fc2-4784-9c34-5fc3ed544115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.554800 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d782003-59ce-4968-a30c-5f12f3a9bd4f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9d7fz\" (UID: \"1d782003-59ce-4968-a30c-5f12f3a9bd4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.554838 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24e933f0-876c-4bf3-87dd-a2c6f7c69a6e-serving-cert\") pod \"service-ca-operator-777779d784-rbcrd\" (UID: \"24e933f0-876c-4bf3-87dd-a2c6f7c69a6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.554891 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw587\" (UniqueName: \"kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-kube-api-access-jw587\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.554987 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b1679f25-d297-4683-b2e6-3657feb872eb-signing-cabundle\") pod \"service-ca-9c57cc56f-9k8h7\" (UID: \"b1679f25-d297-4683-b2e6-3657feb872eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-9k8h7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.555056 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9c8k\" (UniqueName: \"kubernetes.io/projected/4e56b71e-6bab-4c7c-9d42-c20607af7311-kube-api-access-w9c8k\") pod \"package-server-manager-789f6589d5-wqm6m\" (UID: \"4e56b71e-6bab-4c7c-9d42-c20607af7311\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.555127 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmkn8\" (UniqueName: \"kubernetes.io/projected/ee60544b-9f19-48e5-84de-666fb4ca9110-kube-api-access-cmkn8\") pod \"machine-config-server-tkhnx\" (UID: \"ee60544b-9f19-48e5-84de-666fb4ca9110\") " pod="openshift-machine-config-operator/machine-config-server-tkhnx" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.555180 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxpqt\" (UniqueName: \"kubernetes.io/projected/25bbef6b-4746-41e9-83ef-20e9c54a7451-kube-api-access-zxpqt\") pod \"marketplace-operator-79b997595-sjff7\" (UID: \"25bbef6b-4746-41e9-83ef-20e9c54a7451\") " pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.555248 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/bc4281fb-7921-4bde-89f0-059e9b9935e2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x58q8\" (UID: \"bc4281fb-7921-4bde-89f0-059e9b9935e2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.555325 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/25907081-958d-4fb5-a8d2-ce8454adedfb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.555379 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc4281fb-7921-4bde-89f0-059e9b9935e2-proxy-tls\") pod \"machine-config-controller-84d6567774-x58q8\" (UID: \"bc4281fb-7921-4bde-89f0-059e9b9935e2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.555423 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e56b71e-6bab-4c7c-9d42-c20607af7311-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wqm6m\" (UID: \"4e56b71e-6bab-4c7c-9d42-c20607af7311\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.555478 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25bbef6b-4746-41e9-83ef-20e9c54a7451-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sjff7\" (UID: \"25bbef6b-4746-41e9-83ef-20e9c54a7451\") " pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.555517 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7306c7ab-58c7-4034-a406-ec666f9a70b9-metrics-tls\") pod \"dns-default-z69nw\" (UID: \"7306c7ab-58c7-4034-a406-ec666f9a70b9\") " pod="openshift-dns/dns-default-z69nw" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.555551 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwg97\" (UniqueName: \"kubernetes.io/projected/9e7c545a-021d-4b0a-a781-fd4f68258fa3-kube-api-access-rwg97\") pod \"dns-operator-744455d44c-6ndp4\" (UID: \"9e7c545a-021d-4b0a-a781-fd4f68258fa3\") " pod="openshift-dns-operator/dns-operator-744455d44c-6ndp4" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.555647 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d782003-59ce-4968-a30c-5f12f3a9bd4f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9d7fz\" (UID: \"1d782003-59ce-4968-a30c-5f12f3a9bd4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.556268 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/25907081-958d-4fb5-a8d2-ce8454adedfb-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.557089 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d782003-59ce-4968-a30c-5f12f3a9bd4f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9d7fz\" (UID: \"1d782003-59ce-4968-a30c-5f12f3a9bd4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.558034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-config-volume\") pod \"collect-profiles-29431425-kc9qc\" (UID: \"ce9b2f10-5fc2-4784-9c34-5fc3ed544115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.558126 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dndzz\" (UniqueName: \"kubernetes.io/projected/54c6a431-7172-4e4c-b283-24c980b51826-kube-api-access-dndzz\") pod \"migrator-59844c95c7-gsh2j\" (UID: \"54c6a431-7172-4e4c-b283-24c980b51826\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gsh2j" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.562100 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d782003-59ce-4968-a30c-5f12f3a9bd4f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9d7fz\" (UID: \"1d782003-59ce-4968-a30c-5f12f3a9bd4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.565951 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.566067 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5r6t\" (UniqueName: \"kubernetes.io/projected/24e933f0-876c-4bf3-87dd-a2c6f7c69a6e-kube-api-access-z5r6t\") pod \"service-ca-operator-777779d784-rbcrd\" (UID: \"24e933f0-876c-4bf3-87dd-a2c6f7c69a6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.577035 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k48z8\" (UniqueName: \"kubernetes.io/projected/bc4281fb-7921-4bde-89f0-059e9b9935e2-kube-api-access-k48z8\") pod \"machine-config-controller-84d6567774-x58q8\" (UID: \"bc4281fb-7921-4bde-89f0-059e9b9935e2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.578020 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-registry-tls\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.578155 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/25907081-958d-4fb5-a8d2-ce8454adedfb-registry-certificates\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.578185 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25bbef6b-4746-41e9-83ef-20e9c54a7451-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sjff7\" (UID: \"25bbef6b-4746-41e9-83ef-20e9c54a7451\") " pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.581182 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25bbef6b-4746-41e9-83ef-20e9c54a7451-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sjff7\" (UID: \"25bbef6b-4746-41e9-83ef-20e9c54a7451\") " pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.582764 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4k89\" (UniqueName: \"kubernetes.io/projected/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-kube-api-access-s4k89\") pod \"collect-profiles-29431425-kc9qc\" (UID: \"ce9b2f10-5fc2-4784-9c34-5fc3ed544115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.582885 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25907081-958d-4fb5-a8d2-ce8454adedfb-trusted-ca\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: E1216 11:56:58.583985 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:56:59.083963053 +0000 UTC m=+92.802221028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.586083 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25907081-958d-4fb5-a8d2-ce8454adedfb-trusted-ca\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.587073 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/98881921-2262-4a91-b9af-6d1d5207963f-tmpfs\") pod \"packageserver-d55dfcdfc-tzztn\" (UID: \"98881921-2262-4a91-b9af-6d1d5207963f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.592281 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/98881921-2262-4a91-b9af-6d1d5207963f-tmpfs\") pod \"packageserver-d55dfcdfc-tzztn\" (UID: \"98881921-2262-4a91-b9af-6d1d5207963f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.594053 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-registry-tls\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.594506 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc4281fb-7921-4bde-89f0-059e9b9935e2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x58q8\" (UID: \"bc4281fb-7921-4bde-89f0-059e9b9935e2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.594860 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-secret-volume\") pod \"collect-profiles-29431425-kc9qc\" (UID: \"ce9b2f10-5fc2-4784-9c34-5fc3ed544115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.595744 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/25907081-958d-4fb5-a8d2-ce8454adedfb-registry-certificates\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.595796 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhwgz\" (UniqueName: 
\"kubernetes.io/projected/b1679f25-d297-4683-b2e6-3657feb872eb-kube-api-access-jhwgz\") pod \"service-ca-9c57cc56f-9k8h7\" (UID: \"b1679f25-d297-4683-b2e6-3657feb872eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-9k8h7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.596658 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ee60544b-9f19-48e5-84de-666fb4ca9110-certs\") pod \"machine-config-server-tkhnx\" (UID: \"ee60544b-9f19-48e5-84de-666fb4ca9110\") " pod="openshift-machine-config-operator/machine-config-server-tkhnx" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.597986 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-bound-sa-token\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.598033 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98881921-2262-4a91-b9af-6d1d5207963f-apiservice-cert\") pod \"packageserver-d55dfcdfc-tzztn\" (UID: \"98881921-2262-4a91-b9af-6d1d5207963f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.599030 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/25907081-958d-4fb5-a8d2-ce8454adedfb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.604100 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/25907081-958d-4fb5-a8d2-ce8454adedfb-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.605096 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24e933f0-876c-4bf3-87dd-a2c6f7c69a6e-config\") pod \"service-ca-operator-777779d784-rbcrd\" (UID: \"24e933f0-876c-4bf3-87dd-a2c6f7c69a6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.610523 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxpqt\" (UniqueName: \"kubernetes.io/projected/25bbef6b-4746-41e9-83ef-20e9c54a7451-kube-api-access-zxpqt\") pod \"marketplace-operator-79b997595-sjff7\" (UID: \"25bbef6b-4746-41e9-83ef-20e9c54a7451\") " pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.612022 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d782003-59ce-4968-a30c-5f12f3a9bd4f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9d7fz\" (UID: \"1d782003-59ce-4968-a30c-5f12f3a9bd4f\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.614075 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24e933f0-876c-4bf3-87dd-a2c6f7c69a6e-config\") pod \"service-ca-operator-777779d784-rbcrd\" (UID: \"24e933f0-876c-4bf3-87dd-a2c6f7c69a6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.619427 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98881921-2262-4a91-b9af-6d1d5207963f-apiservice-cert\") pod \"packageserver-d55dfcdfc-tzztn\" (UID: \"98881921-2262-4a91-b9af-6d1d5207963f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.623239 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25bbef6b-4746-41e9-83ef-20e9c54a7451-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sjff7\" (UID: \"25bbef6b-4746-41e9-83ef-20e9c54a7451\") " pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.623942 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc4281fb-7921-4bde-89f0-059e9b9935e2-proxy-tls\") pod \"machine-config-controller-84d6567774-x58q8\" (UID: \"bc4281fb-7921-4bde-89f0-059e9b9935e2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.624194 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98881921-2262-4a91-b9af-6d1d5207963f-webhook-cert\") pod \"packageserver-d55dfcdfc-tzztn\" (UID: \"98881921-2262-4a91-b9af-6d1d5207963f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.624360 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24e933f0-876c-4bf3-87dd-a2c6f7c69a6e-serving-cert\") pod \"service-ca-operator-777779d784-rbcrd\" (UID: \"24e933f0-876c-4bf3-87dd-a2c6f7c69a6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.624909 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfv47\" (UniqueName: \"kubernetes.io/projected/98881921-2262-4a91-b9af-6d1d5207963f-kube-api-access-rfv47\") pod \"packageserver-d55dfcdfc-tzztn\" (UID: \"98881921-2262-4a91-b9af-6d1d5207963f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.625285 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf"] Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.625328 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv"] Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.625342 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-nq4vq"] Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.625353 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dxhld"] Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.656870 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-secret-volume\") pod \"collect-profiles-29431425-kc9qc\" (UID: \"ce9b2f10-5fc2-4784-9c34-5fc3ed544115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.663044 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw587\" (UniqueName: \"kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-kube-api-access-jw587\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: W1216 11:56:58.663518 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58ff610a_1f7d_4e29_83c2_a95691a7dbc3.slice/crio-379982bb8bc65714fa0d40557628f315c5deda194c5366c17989d96647236cfe WatchSource:0}: Error finding container 379982bb8bc65714fa0d40557628f315c5deda194c5366c17989d96647236cfe: Status 404 returned error can't find the container with id 379982bb8bc65714fa0d40557628f315c5deda194c5366c17989d96647236cfe Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.680643 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5r6t\" (UniqueName: \"kubernetes.io/projected/24e933f0-876c-4bf3-87dd-a2c6f7c69a6e-kube-api-access-z5r6t\") pod \"service-ca-operator-777779d784-rbcrd\" (UID: \"24e933f0-876c-4bf3-87dd-a2c6f7c69a6e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.726888 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lzhxz"] Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.735702 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.736741 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/796d11e4-b3f6-46b1-ac93-5f7738a3cd28-srv-cert\") pod \"olm-operator-6b444d44fb-k5tfc\" (UID: \"796d11e4-b3f6-46b1-ac93-5f7738a3cd28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.736814 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhwgz\" (UniqueName: \"kubernetes.io/projected/b1679f25-d297-4683-b2e6-3657feb872eb-kube-api-access-jhwgz\") pod \"service-ca-9c57cc56f-9k8h7\" (UID: \"b1679f25-d297-4683-b2e6-3657feb872eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-9k8h7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.736845 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/01fa8b28-cdb8-4c39-917e-af29958ec710-profile-collector-cert\") pod \"catalog-operator-68c6474976-stgk9\" (UID: \"01fa8b28-cdb8-4c39-917e-af29958ec710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.736874 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ee60544b-9f19-48e5-84de-666fb4ca9110-certs\") pod \"machine-config-server-tkhnx\" (UID: \"ee60544b-9f19-48e5-84de-666fb4ca9110\") " pod="openshift-machine-config-operator/machine-config-server-tkhnx" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.737007 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/01fa8b28-cdb8-4c39-917e-af29958ec710-srv-cert\") pod \"catalog-operator-68c6474976-stgk9\" (UID: \"01fa8b28-cdb8-4c39-917e-af29958ec710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.737076 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7306c7ab-58c7-4034-a406-ec666f9a70b9-config-volume\") pod \"dns-default-z69nw\" (UID: \"7306c7ab-58c7-4034-a406-ec666f9a70b9\") " pod="openshift-dns/dns-default-z69nw" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.737112 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ee60544b-9f19-48e5-84de-666fb4ca9110-node-bootstrap-token\") pod \"machine-config-server-tkhnx\" (UID: \"ee60544b-9f19-48e5-84de-666fb4ca9110\") " pod="openshift-machine-config-operator/machine-config-server-tkhnx" Dec 16 11:56:58 crc kubenswrapper[4805]: E1216 11:56:58.737769 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:56:59.237746596 +0000 UTC m=+92.956004401 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.739893 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d7f81a73-07df-4bbe-ac9b-33897678a7ec-cert\") pod \"ingress-canary-2lp7s\" (UID: \"d7f81a73-07df-4bbe-ac9b-33897678a7ec\") " pod="openshift-ingress-canary/ingress-canary-2lp7s" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.739973 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zktn8\" (UniqueName: \"kubernetes.io/projected/7306c7ab-58c7-4034-a406-ec666f9a70b9-kube-api-access-zktn8\") pod \"dns-default-z69nw\" (UID: \"7306c7ab-58c7-4034-a406-ec666f9a70b9\") " pod="openshift-dns/dns-default-z69nw" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.740343 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b1679f25-d297-4683-b2e6-3657feb872eb-signing-key\") pod \"service-ca-9c57cc56f-9k8h7\" (UID: \"b1679f25-d297-4683-b2e6-3657feb872eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-9k8h7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.740379 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6kjw\" (UniqueName: \"kubernetes.io/projected/796d11e4-b3f6-46b1-ac93-5f7738a3cd28-kube-api-access-p6kjw\") pod \"olm-operator-6b444d44fb-k5tfc\" (UID: \"796d11e4-b3f6-46b1-ac93-5f7738a3cd28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.740430 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhzxj\" (UniqueName: \"kubernetes.io/projected/d7f81a73-07df-4bbe-ac9b-33897678a7ec-kube-api-access-jhzxj\") pod \"ingress-canary-2lp7s\" (UID: \"d7f81a73-07df-4bbe-ac9b-33897678a7ec\") " pod="openshift-ingress-canary/ingress-canary-2lp7s" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.740483 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9e7c545a-021d-4b0a-a781-fd4f68258fa3-metrics-tls\") pod \"dns-operator-744455d44c-6ndp4\" (UID: \"9e7c545a-021d-4b0a-a781-fd4f68258fa3\") " pod="openshift-dns-operator/dns-operator-744455d44c-6ndp4" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.740539 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b1679f25-d297-4683-b2e6-3657feb872eb-signing-cabundle\") pod \"service-ca-9c57cc56f-9k8h7\" (UID: \"b1679f25-d297-4683-b2e6-3657feb872eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-9k8h7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.740577 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vwdh\" (UniqueName: \"kubernetes.io/projected/01fa8b28-cdb8-4c39-917e-af29958ec710-kube-api-access-2vwdh\") pod 
\"catalog-operator-68c6474976-stgk9\" (UID: \"01fa8b28-cdb8-4c39-917e-af29958ec710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.740756 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9c8k\" (UniqueName: \"kubernetes.io/projected/4e56b71e-6bab-4c7c-9d42-c20607af7311-kube-api-access-w9c8k\") pod \"package-server-manager-789f6589d5-wqm6m\" (UID: \"4e56b71e-6bab-4c7c-9d42-c20607af7311\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.740795 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmkn8\" (UniqueName: \"kubernetes.io/projected/ee60544b-9f19-48e5-84de-666fb4ca9110-kube-api-access-cmkn8\") pod \"machine-config-server-tkhnx\" (UID: \"ee60544b-9f19-48e5-84de-666fb4ca9110\") " pod="openshift-machine-config-operator/machine-config-server-tkhnx" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.740857 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e56b71e-6bab-4c7c-9d42-c20607af7311-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wqm6m\" (UID: \"4e56b71e-6bab-4c7c-9d42-c20607af7311\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.740892 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/796d11e4-b3f6-46b1-ac93-5f7738a3cd28-profile-collector-cert\") pod \"olm-operator-6b444d44fb-k5tfc\" (UID: \"796d11e4-b3f6-46b1-ac93-5f7738a3cd28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.740925 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwg97\" (UniqueName: \"kubernetes.io/projected/9e7c545a-021d-4b0a-a781-fd4f68258fa3-kube-api-access-rwg97\") pod \"dns-operator-744455d44c-6ndp4\" (UID: \"9e7c545a-021d-4b0a-a781-fd4f68258fa3\") " pod="openshift-dns-operator/dns-operator-744455d44c-6ndp4" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.740948 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7306c7ab-58c7-4034-a406-ec666f9a70b9-metrics-tls\") pod \"dns-default-z69nw\" (UID: \"7306c7ab-58c7-4034-a406-ec666f9a70b9\") " pod="openshift-dns/dns-default-z69nw" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.742466 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dndzz\" (UniqueName: \"kubernetes.io/projected/54c6a431-7172-4e4c-b283-24c980b51826-kube-api-access-dndzz\") pod \"migrator-59844c95c7-gsh2j\" (UID: \"54c6a431-7172-4e4c-b283-24c980b51826\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gsh2j" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.742748 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: 
\"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: E1216 11:56:58.743869 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:56:59.243851462 +0000 UTC m=+92.962109267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.747516 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b1679f25-d297-4683-b2e6-3657feb872eb-signing-cabundle\") pod \"service-ca-9c57cc56f-9k8h7\" (UID: \"b1679f25-d297-4683-b2e6-3657feb872eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-9k8h7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.750781 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7306c7ab-58c7-4034-a406-ec666f9a70b9-config-volume\") pod \"dns-default-z69nw\" (UID: \"7306c7ab-58c7-4034-a406-ec666f9a70b9\") " pod="openshift-dns/dns-default-z69nw" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.793077 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ee60544b-9f19-48e5-84de-666fb4ca9110-node-bootstrap-token\") pod \"machine-config-server-tkhnx\" (UID: \"ee60544b-9f19-48e5-84de-666fb4ca9110\") " pod="openshift-machine-config-operator/machine-config-server-tkhnx" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.818313 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b1679f25-d297-4683-b2e6-3657feb872eb-signing-key\") pod \"service-ca-9c57cc56f-9k8h7\" (UID: \"b1679f25-d297-4683-b2e6-3657feb872eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-9k8h7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.818426 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-28rcg"] Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.834438 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k48z8\" (UniqueName: \"kubernetes.io/projected/bc4281fb-7921-4bde-89f0-059e9b9935e2-kube-api-access-k48z8\") pod \"machine-config-controller-84d6567774-x58q8\" (UID: \"bc4281fb-7921-4bde-89f0-059e9b9935e2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.838294 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d7f81a73-07df-4bbe-ac9b-33897678a7ec-cert\") pod \"ingress-canary-2lp7s\" (UID: \"d7f81a73-07df-4bbe-ac9b-33897678a7ec\") " pod="openshift-ingress-canary/ingress-canary-2lp7s" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.838923 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7306c7ab-58c7-4034-a406-ec666f9a70b9-metrics-tls\") pod \"dns-default-z69nw\" (UID: \"7306c7ab-58c7-4034-a406-ec666f9a70b9\") " pod="openshift-dns/dns-default-z69nw" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.844302 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mwpbn"] Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.845892 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.846344 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6kjw\" (UniqueName: \"kubernetes.io/projected/796d11e4-b3f6-46b1-ac93-5f7738a3cd28-kube-api-access-p6kjw\") pod \"olm-operator-6b444d44fb-k5tfc\" (UID: \"796d11e4-b3f6-46b1-ac93-5f7738a3cd28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.846407 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vwdh\" (UniqueName: \"kubernetes.io/projected/01fa8b28-cdb8-4c39-917e-af29958ec710-kube-api-access-2vwdh\") pod \"catalog-operator-68c6474976-stgk9\" (UID: \"01fa8b28-cdb8-4c39-917e-af29958ec710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.846470 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/796d11e4-b3f6-46b1-ac93-5f7738a3cd28-profile-collector-cert\") pod \"olm-operator-6b444d44fb-k5tfc\" (UID: \"796d11e4-b3f6-46b1-ac93-5f7738a3cd28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.846541 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/796d11e4-b3f6-46b1-ac93-5f7738a3cd28-srv-cert\") pod \"olm-operator-6b444d44fb-k5tfc\" (UID: \"796d11e4-b3f6-46b1-ac93-5f7738a3cd28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.846577 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/01fa8b28-cdb8-4c39-917e-af29958ec710-profile-collector-cert\") pod \"catalog-operator-68c6474976-stgk9\" (UID: \"01fa8b28-cdb8-4c39-917e-af29958ec710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.846621 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/01fa8b28-cdb8-4c39-917e-af29958ec710-srv-cert\") pod \"catalog-operator-68c6474976-stgk9\" (UID: \"01fa8b28-cdb8-4c39-917e-af29958ec710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.850257 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.850801 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8" Dec 16 11:56:58 crc kubenswrapper[4805]: E1216 11:56:58.857127 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:56:59.357101796 +0000 UTC m=+93.075359591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.863070 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4k89\" (UniqueName: \"kubernetes.io/projected/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-kube-api-access-s4k89\") pod \"collect-profiles-29431425-kc9qc\" (UID: \"ce9b2f10-5fc2-4784-9c34-5fc3ed544115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.865349 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-bound-sa-token\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.869258 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.869528 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/796d11e4-b3f6-46b1-ac93-5f7738a3cd28-srv-cert\") pod \"olm-operator-6b444d44fb-k5tfc\" (UID: \"796d11e4-b3f6-46b1-ac93-5f7738a3cd28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.875275 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/01fa8b28-cdb8-4c39-917e-af29958ec710-srv-cert\") pod \"catalog-operator-68c6474976-stgk9\" (UID: \"01fa8b28-cdb8-4c39-917e-af29958ec710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.874609 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e56b71e-6bab-4c7c-9d42-c20607af7311-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wqm6m\" (UID: \"4e56b71e-6bab-4c7c-9d42-c20607af7311\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.878097 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhwgz\" (UniqueName: \"kubernetes.io/projected/b1679f25-d297-4683-b2e6-3657feb872eb-kube-api-access-jhwgz\") pod \"service-ca-9c57cc56f-9k8h7\" (UID: \"b1679f25-d297-4683-b2e6-3657feb872eb\") " pod="openshift-service-ca/service-ca-9c57cc56f-9k8h7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.895996 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.899352 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9e7c545a-021d-4b0a-a781-fd4f68258fa3-metrics-tls\") pod \"dns-operator-744455d44c-6ndp4\" (UID: \"9e7c545a-021d-4b0a-a781-fd4f68258fa3\") " pod="openshift-dns-operator/dns-operator-744455d44c-6ndp4" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.910365 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/796d11e4-b3f6-46b1-ac93-5f7738a3cd28-profile-collector-cert\") pod \"olm-operator-6b444d44fb-k5tfc\" (UID: \"796d11e4-b3f6-46b1-ac93-5f7738a3cd28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.910516 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9c8k\" (UniqueName: \"kubernetes.io/projected/4e56b71e-6bab-4c7c-9d42-c20607af7311-kube-api-access-w9c8k\") pod \"package-server-manager-789f6589d5-wqm6m\" (UID: \"4e56b71e-6bab-4c7c-9d42-c20607af7311\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.911632 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dndzz\" (UniqueName: \"kubernetes.io/projected/54c6a431-7172-4e4c-b283-24c980b51826-kube-api-access-dndzz\") pod \"migrator-59844c95c7-gsh2j\" (UID: \"54c6a431-7172-4e4c-b283-24c980b51826\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gsh2j" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.912889 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhzxj\" (UniqueName: \"kubernetes.io/projected/d7f81a73-07df-4bbe-ac9b-33897678a7ec-kube-api-access-jhzxj\") pod \"ingress-canary-2lp7s\" (UID: \"d7f81a73-07df-4bbe-ac9b-33897678a7ec\") " pod="openshift-ingress-canary/ingress-canary-2lp7s" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.958403 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/01fa8b28-cdb8-4c39-917e-af29958ec710-profile-collector-cert\") pod \"catalog-operator-68c6474976-stgk9\" (UID: \"01fa8b28-cdb8-4c39-917e-af29958ec710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.958852 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmkn8\" (UniqueName: \"kubernetes.io/projected/ee60544b-9f19-48e5-84de-666fb4ca9110-kube-api-access-cmkn8\") pod \"machine-config-server-tkhnx\" (UID: \"ee60544b-9f19-48e5-84de-666fb4ca9110\") " pod="openshift-machine-config-operator/machine-config-server-tkhnx" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.959685 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.960267 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zktn8\" (UniqueName: \"kubernetes.io/projected/7306c7ab-58c7-4034-a406-ec666f9a70b9-kube-api-access-zktn8\") pod \"dns-default-z69nw\" (UID: \"7306c7ab-58c7-4034-a406-ec666f9a70b9\") " pod="openshift-dns/dns-default-z69nw" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.964089 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwg97\" (UniqueName: \"kubernetes.io/projected/9e7c545a-021d-4b0a-a781-fd4f68258fa3-kube-api-access-rwg97\") pod \"dns-operator-744455d44c-6ndp4\" (UID: \"9e7c545a-021d-4b0a-a781-fd4f68258fa3\") " pod="openshift-dns-operator/dns-operator-744455d44c-6ndp4" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.964468 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ee60544b-9f19-48e5-84de-666fb4ca9110-certs\") pod \"machine-config-server-tkhnx\" (UID: \"ee60544b-9f19-48e5-84de-666fb4ca9110\") " pod="openshift-machine-config-operator/machine-config-server-tkhnx" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.969916 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.970303 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd" Dec 16 11:56:58 crc kubenswrapper[4805]: E1216 11:56:58.971530 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:56:59.471506553 +0000 UTC m=+93.189764518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.985933 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6ndp4" Dec 16 11:56:58 crc kubenswrapper[4805]: I1216 11:56:58.997209 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gsh2j" Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.003185 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vwdh\" (UniqueName: \"kubernetes.io/projected/01fa8b28-cdb8-4c39-917e-af29958ec710-kube-api-access-2vwdh\") pod \"catalog-operator-68c6474976-stgk9\" (UID: \"01fa8b28-cdb8-4c39-917e-af29958ec710\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.003251 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6kjw\" (UniqueName: \"kubernetes.io/projected/796d11e4-b3f6-46b1-ac93-5f7738a3cd28-kube-api-access-p6kjw\") pod \"olm-operator-6b444d44fb-k5tfc\" (UID: \"796d11e4-b3f6-46b1-ac93-5f7738a3cd28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.010794 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9k8h7" Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.027299 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tkhnx" Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.037586 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.057282 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m" Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.083924 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2lp7s" Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.084485 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-z69nw" Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.084524 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.085021 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:56:59 crc kubenswrapper[4805]: E1216 11:56:59.085347 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:56:59.585314223 +0000 UTC m=+93.303572028 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.153484 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zbcnq"] Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.185338 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md"] Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.188100 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:59 crc kubenswrapper[4805]: E1216 11:56:59.188788 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:56:59.688775735 +0000 UTC m=+93.407033540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.196072 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" event={"ID":"5a0ad740-271d-452e-b8be-d5e190a46719","Type":"ContainerStarted","Data":"1a246bef6a5a1e41f6e26d8164084690239a579a15a11ae34ddaa7dca028adc4"} Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.221545 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf" event={"ID":"0b9eab42-29fd-4d17-8c0d-57eb48913cb7","Type":"ContainerStarted","Data":"0033324a91038e153d1fb426b2fb4b49a2471fbff1a59b0eddceb411f374dcb7"} Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.228768 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" event={"ID":"5826956e-b8ea-4053-8987-ebe9f350c975","Type":"ContainerStarted","Data":"46f3075b880ef69183c58032732fdbeb29035f39102f112b3d99f92c782a407d"} Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.235129 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-s9j77"] Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.236494 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" 
event={"ID":"dbccfd36-acd0-4302-befd-f032ebddb856","Type":"ContainerStarted","Data":"83266438c4f4ecc26045136bf9e89a601a2a72892254152be6e51ec7cf4e0d29"} Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.260609 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x9qqr"] Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.281843 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n"] Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.297825 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:56:59 crc kubenswrapper[4805]: E1216 11:56:59.298107 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:56:59.798092365 +0000 UTC m=+93.516350170 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.301201 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jccm"] Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.317728 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" event={"ID":"7f262dc5-9bae-450c-ab81-3172ba82e700","Type":"ContainerStarted","Data":"6464d92d67da9311ec9d5dafb3a6de00752b7bbcd0f5ba9225a079d10b993d5b"} Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.339426 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c"] Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.360732 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r"] Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.361560 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" event={"ID":"a6697a04-3efc-4511-bd87-42630448efe0","Type":"ContainerStarted","Data":"5a5a470e2e6e2dbaed3735f37bb92cdfc98b966fab4746b0ba6baad2dbd78b5f"} Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.361632 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" event={"ID":"a6697a04-3efc-4511-bd87-42630448efe0","Type":"ContainerStarted","Data":"a97da0718d0fe10de8b9db17c25aa80253963aafebfcdd75d682d008d3782fc9"} Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.370403 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" event={"ID":"58ff610a-1f7d-4e29-83c2-a95691a7dbc3","Type":"ContainerStarted","Data":"379982bb8bc65714fa0d40557628f315c5deda194c5366c17989d96647236cfe"} Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.387215 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-28rcg" event={"ID":"2d03a289-7b6f-4264-83cc-fe33b26d5a2c","Type":"ContainerStarted","Data":"031f0e0e5563fb49952db30825570388b9beb5eb9f4d32a5d6d339175ed86cbb"} Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.389935 4805 generic.go:334] "Generic (PLEG): container finished" podID="2377e2e1-2de4-46e9-a0a9-768f5dd52317" containerID="4f1f17ca597e6bc336341a282cdc5f763b1f44902a7ac301f58fffe94ffce4f3" exitCode=0 Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.390007 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" event={"ID":"2377e2e1-2de4-46e9-a0a9-768f5dd52317","Type":"ContainerDied","Data":"4f1f17ca597e6bc336341a282cdc5f763b1f44902a7ac301f58fffe94ffce4f3"} Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.390040 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" event={"ID":"2377e2e1-2de4-46e9-a0a9-768f5dd52317","Type":"ContainerStarted","Data":"6f94188cefc5cf90ab8b5976dc8e8a1fa98e1f24cef235818bc33f8e30e5fb69"} Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.395105 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" event={"ID":"9a656193-7884-4e3d-8a17-4ff680c4a116","Type":"ContainerStarted","Data":"8c663c83a7ccb1fef56ee1044233782cbe843a2f2ee36672c83032acb2e817e5"} Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.397641 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dxhld" event={"ID":"69858fb6-30f7-4b9b-a240-c95b4aa2de5a","Type":"ContainerStarted","Data":"9d611048d7abc6e42c55631ceeb86f6a0a5d02ed29298504e96d41d94d0bbaac"} Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.399651 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:59 crc kubenswrapper[4805]: E1216 11:56:59.401345 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:56:59.90132362 +0000 UTC m=+93.619581495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.402813 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" event={"ID":"832ba633-f44c-4aa1-8791-65656ed2a744","Type":"ContainerStarted","Data":"77fc86a82e094a301d5737e72d87db3a216ca4893e7e853dbc8a7b6228506833"} Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.423047 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" event={"ID":"50e153c5-173b-4a28-a028-5ebf2ff22054","Type":"ContainerStarted","Data":"35eda826864855705dbf24332c158b9f86832dad560b8983a47ba053535ba93b"} Dec 16 11:56:59 crc kubenswrapper[4805]: W1216 11:56:59.439950 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod184529ec_0250_4ed3_a1ef_4a5606202a85.slice/crio-5f189ddcec9def031a4a5e832d6cd38e1401a8e024c80648ad7ea2a2f8abdd3a WatchSource:0}: Error finding container 5f189ddcec9def031a4a5e832d6cd38e1401a8e024c80648ad7ea2a2f8abdd3a: Status 404 returned error can't find the container with id 5f189ddcec9def031a4a5e832d6cd38e1401a8e024c80648ad7ea2a2f8abdd3a Dec 16 11:56:59 crc kubenswrapper[4805]: W1216 11:56:59.485742 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7efbfb18_2b26_4c71_9ef4_e9b7966fe2ba.slice/crio-f4e10339fa0d72a8cf01cecc3b1007aef1bfdbbe22563d5e0ef69363bb2feb16 WatchSource:0}: Error finding container f4e10339fa0d72a8cf01cecc3b1007aef1bfdbbe22563d5e0ef69363bb2feb16: Status 404 returned error can't find the container with id f4e10339fa0d72a8cf01cecc3b1007aef1bfdbbe22563d5e0ef69363bb2feb16 Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.495186 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx"] Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.501025 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:56:59 crc kubenswrapper[4805]: E1216 11:56:59.501194 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:00.001162238 +0000 UTC m=+93.719420053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.501507 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:59 crc kubenswrapper[4805]: E1216 11:56:59.502352 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:00.002334112 +0000 UTC m=+93.720592107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.560949 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc"] Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.604527 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:56:59 crc kubenswrapper[4805]: E1216 11:56:59.605506 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:00.105475824 +0000 UTC m=+93.823733629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.701302 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-r2dsr"] Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.709605 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:59 crc kubenswrapper[4805]: E1216 11:56:59.710477 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:00.21045798 +0000 UTC m=+93.928715785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.714062 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kkf49"] Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.742924 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wt247" podStartSLOduration=71.742907285 podStartE2EDuration="1m11.742907285s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:56:59.740381202 +0000 UTC m=+93.458639017" watchObservedRunningTime="2025-12-16 11:56:59.742907285 +0000 UTC m=+93.461165100" Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.743908 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw"] Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.770042 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8"] Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.816060 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:56:59 crc kubenswrapper[4805]: E1216 11:56:59.816828 4805 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:00.316790205 +0000 UTC m=+94.035048010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.817472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:56:59 crc kubenswrapper[4805]: E1216 11:56:59.818208 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:00.318191275 +0000 UTC m=+94.036449080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:56:59 crc kubenswrapper[4805]: W1216 11:56:59.820938 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaf86ef4_c14b_41b6_bd97_92a701b82562.slice/crio-66202b4fb39edf217eac372a927a46840933ac6e246f9ab5a300ad3b4baeae5f WatchSource:0}: Error finding container 66202b4fb39edf217eac372a927a46840933ac6e246f9ab5a300ad3b4baeae5f: Status 404 returned error can't find the container with id 66202b4fb39edf217eac372a927a46840933ac6e246f9ab5a300ad3b4baeae5f Dec 16 11:56:59 crc kubenswrapper[4805]: W1216 11:56:59.828050 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae46fc1b_666a_498f_92f4_673cb535530e.slice/crio-5bcc52fb934157061103faf276765cdaf7fb02b2e2143b92a6dbd64fd098033e WatchSource:0}: Error finding container 5bcc52fb934157061103faf276765cdaf7fb02b2e2143b92a6dbd64fd098033e: Status 404 returned error can't find the container with id 5bcc52fb934157061103faf276765cdaf7fb02b2e2143b92a6dbd64fd098033e Dec 16 11:56:59 crc kubenswrapper[4805]: W1216 11:56:59.869741 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod424e23bf_3841_48b3_80c7_b1d73d8110c7.slice/crio-640681467db310a3dd7feb20cf5a98946aa1d64e11fb7a7d2c8f46ef6589b8f3 WatchSource:0}: Error finding container 640681467db310a3dd7feb20cf5a98946aa1d64e11fb7a7d2c8f46ef6589b8f3: Status 404 
returned error can't find the container with id 640681467db310a3dd7feb20cf5a98946aa1d64e11fb7a7d2c8f46ef6589b8f3 Dec 16 11:56:59 crc kubenswrapper[4805]: I1216 11:56:59.919614 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:56:59 crc kubenswrapper[4805]: E1216 11:56:59.919932 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:00.419911407 +0000 UTC m=+94.138169212 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.023119 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:00 crc kubenswrapper[4805]: E1216 11:57:00.023916 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:00.523871983 +0000 UTC m=+94.242129788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.129688 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:00 crc kubenswrapper[4805]: E1216 11:57:00.132895 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:00.632836653 +0000 UTC m=+94.351094588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.133130 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:00 crc kubenswrapper[4805]: E1216 11:57:00.133946 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:00.633923604 +0000 UTC m=+94.352181409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.233832 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:00 crc kubenswrapper[4805]: E1216 11:57:00.234639 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:00.734623776 +0000 UTC m=+94.452881571 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.292573 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz"] Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.341731 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:00 crc kubenswrapper[4805]: E1216 11:57:00.343482 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:00.843459473 +0000 UTC m=+94.561717278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.443430 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:00 crc kubenswrapper[4805]: E1216 11:57:00.444305 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:00.944287019 +0000 UTC m=+94.662544824 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.494943 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" event={"ID":"eeb94cb2-1f1c-4547-93ab-895959638a88","Type":"ContainerStarted","Data":"b705353bf0940ed26ae05aba8ff6c082e8c9b812e56d2fb760fd220f3495ebed"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.497583 4805 generic.go:334] "Generic (PLEG): container finished" podID="dbccfd36-acd0-4302-befd-f032ebddb856" containerID="29dfbfd7f89e1b1c4147a6de89bec369aac141141ce6c8c0ef583ba70dc917fe" exitCode=0 Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.497644 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" event={"ID":"dbccfd36-acd0-4302-befd-f032ebddb856","Type":"ContainerDied","Data":"29dfbfd7f89e1b1c4147a6de89bec369aac141141ce6c8c0ef583ba70dc917fe"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.501896 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" event={"ID":"ae46fc1b-666a-498f-92f4-673cb535530e","Type":"ContainerStarted","Data":"5bcc52fb934157061103faf276765cdaf7fb02b2e2143b92a6dbd64fd098033e"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.502919 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md" event={"ID":"7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba","Type":"ContainerStarted","Data":"f4e10339fa0d72a8cf01cecc3b1007aef1bfdbbe22563d5e0ef69363bb2feb16"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.503909 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r" event={"ID":"a176db58-5c9e-4446-bccf-01eabe6612d2","Type":"ContainerStarted","Data":"eb470d9c9ebbc392f583c0d8802ce2d03cf48ad89a6ae415820855e03580391b"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.505541 4805 generic.go:334] "Generic (PLEG): container finished" podID="5826956e-b8ea-4053-8987-ebe9f350c975" containerID="8a151dab4eff9f967c5c7a327ca743c5104ed4f712a8ae722fbf1ed5b143b4d7" exitCode=0 Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.505598 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" event={"ID":"5826956e-b8ea-4053-8987-ebe9f350c975","Type":"ContainerDied","Data":"8a151dab4eff9f967c5c7a327ca743c5104ed4f712a8ae722fbf1ed5b143b4d7"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.561392 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:00 crc kubenswrapper[4805]: E1216 11:57:00.562029 4805 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:01.062010962 +0000 UTC m=+94.780268767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.579121 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" event={"ID":"424e23bf-3841-48b3-80c7-b1d73d8110c7","Type":"ContainerStarted","Data":"640681467db310a3dd7feb20cf5a98946aa1d64e11fb7a7d2c8f46ef6589b8f3"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.579972 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c" event={"ID":"be38a4fa-fb1e-4f8d-9887-b4232ed7c087","Type":"ContainerStarted","Data":"21d616ff22a087c2302e7e905205743ea1bdd3fd6306dbf4dbd761eb7e9f9429"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.581353 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-j4rx5" event={"ID":"1d60eaf3-b804-48a1-bc67-710f8ce5805f","Type":"ContainerStarted","Data":"aa13e1e34785857e844579db0cf5baae59175f5fe9cb802ebc78b22cc1249929"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.592809 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx" event={"ID":"06647187-e41a-4641-9fab-7604122b8f0a","Type":"ContainerStarted","Data":"dbf630b02f6d02fc5758835981f72519836ebbf7ebdb84036dce1618cbcc15a5"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.621180 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" event={"ID":"58ff610a-1f7d-4e29-83c2-a95691a7dbc3","Type":"ContainerStarted","Data":"ddf3d6db7979c1acb1f6993322fc67c9e1fe014433e7378774eed2b20ad3c20d"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.622314 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.623620 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" event={"ID":"a700da85-9bcf-4223-aaed-e9baa489beab","Type":"ContainerStarted","Data":"4a51e87cc4accae496212615ced40f41777c45af4fc9de9d4df7c3a45a239222"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.626057 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" event={"ID":"9a656193-7884-4e3d-8a17-4ff680c4a116","Type":"ContainerStarted","Data":"4b2193e5528dbeb1a5d4361157a8b78d2d4976e7a93f9ab30b0eca8d67023dd6"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.631668 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jccm" event={"ID":"df2558d5-6ce0-4fb0-b689-fc8682a89744","Type":"ContainerStarted","Data":"91f90a5dd02cfca8228333733c0bc7ebd1b5b450be77d463f90f5e0d9655ee4f"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.650974 4805 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-jt4tv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.651063 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" podUID="58ff610a-1f7d-4e29-83c2-a95691a7dbc3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.653499 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-s9j77" event={"ID":"184529ec-0250-4ed3-a1ef-4a5606202a85","Type":"ContainerStarted","Data":"5f189ddcec9def031a4a5e832d6cd38e1401a8e024c80648ad7ea2a2f8abdd3a"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.662646 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:00 crc kubenswrapper[4805]: E1216 11:57:00.663261 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:01.163206559 +0000 UTC m=+94.881464384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.757257 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" event={"ID":"aaf86ef4-c14b-41b6-bd97-92a701b82562","Type":"ContainerStarted","Data":"66202b4fb39edf217eac372a927a46840933ac6e246f9ab5a300ad3b4baeae5f"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.777159 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:00 crc kubenswrapper[4805]: E1216 11:57:00.777848 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:01.277836452 +0000 UTC m=+94.996094257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.789097 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf" event={"ID":"0b9eab42-29fd-4d17-8c0d-57eb48913cb7","Type":"ContainerStarted","Data":"0ba5490c6521e856807a8a9728d895889445e974a9e8171d6d2cc9854dce38b6"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.790703 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8" event={"ID":"bc4281fb-7921-4bde-89f0-059e9b9935e2","Type":"ContainerStarted","Data":"cf5292dbf7e4ee3621d1ba2e2d0a00769e2df3b912a2186fab44ec66defd1bb5"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.791546 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zbcnq" event={"ID":"e7893fbf-4354-4fc6-bc2a-fdadbdee1311","Type":"ContainerStarted","Data":"08ddda3a98485545330bee96d76a181999081919949f0f0af75f328629f75b7a"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.793179 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kkf49" event={"ID":"55e84bc6-30ab-46a1-8535-e9fa83eaaa5b","Type":"ContainerStarted","Data":"a5d7d326209daff0dfd026d35d5268646e077b3297934a4a5357a1d94ab6b890"} Dec 16 11:57:00 crc kubenswrapper[4805]: I1216 11:57:00.932347 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:00 crc kubenswrapper[4805]: E1216 11:57:00.932805 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:01.432788728 +0000 UTC m=+95.151046533 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:01 crc kubenswrapper[4805]: I1216 11:57:01.101334 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:01 crc kubenswrapper[4805]: E1216 11:57:01.101670 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:01.601647275 +0000 UTC m=+95.319905080 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:01 crc kubenswrapper[4805]: I1216 11:57:01.105392 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" podStartSLOduration=73.103752995 podStartE2EDuration="1m13.103752995s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:00.954919956 +0000 UTC m=+94.673177761" watchObservedRunningTime="2025-12-16 11:57:01.103752995 +0000 UTC m=+94.822010810" Dec 16 11:57:01 crc kubenswrapper[4805]: I1216 11:57:01.228079 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:01 crc kubenswrapper[4805]: E1216 11:57:01.228338 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:01.728307545 +0000 UTC m=+95.446565360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:01 crc kubenswrapper[4805]: I1216 11:57:01.228480 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:01 crc kubenswrapper[4805]: E1216 11:57:01.229401 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:01.729381506 +0000 UTC m=+95.447639321 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:01 crc kubenswrapper[4805]: I1216 11:57:01.256163 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5v6xf" podStartSLOduration=73.256119537 podStartE2EDuration="1m13.256119537s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:01.108989126 +0000 UTC m=+94.827246931" watchObservedRunningTime="2025-12-16 11:57:01.256119537 +0000 UTC m=+94.974377362" Dec 16 11:57:01 crc kubenswrapper[4805]: I1216 11:57:01.330032 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:01 crc kubenswrapper[4805]: E1216 11:57:01.330488 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:01.830466059 +0000 UTC m=+95.548723864 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:01 crc kubenswrapper[4805]: I1216 11:57:01.456433 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:01 crc kubenswrapper[4805]: E1216 11:57:01.457248 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:01.957228653 +0000 UTC m=+95.675486458 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:01 crc kubenswrapper[4805]: I1216 11:57:01.560462 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:01 crc kubenswrapper[4805]: E1216 11:57:01.561334 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:02.061301621 +0000 UTC m=+95.779559426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:01 crc kubenswrapper[4805]: I1216 11:57:01.661981 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:01 crc kubenswrapper[4805]: E1216 11:57:01.662323 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:02.162310432 +0000 UTC m=+95.880568237 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:01 crc kubenswrapper[4805]: I1216 11:57:01.745443 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc"] Dec 16 11:57:01 crc kubenswrapper[4805]: I1216 11:57:01.814893 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:01 crc kubenswrapper[4805]: E1216 11:57:01.819891 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:02.319844132 +0000 UTC m=+96.038101937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:01 crc kubenswrapper[4805]: I1216 11:57:01.861121 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9k8h7"] Dec 16 11:57:01 crc kubenswrapper[4805]: I1216 11:57:01.924313 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:01 crc kubenswrapper[4805]: E1216 11:57:01.926872 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:02.426857517 +0000 UTC m=+96.145115322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.029209 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:02 crc kubenswrapper[4805]: E1216 11:57:02.029606 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:02.529588017 +0000 UTC m=+96.247845832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.136078 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:02 crc kubenswrapper[4805]: E1216 11:57:02.136581 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:02.636565311 +0000 UTC m=+96.354823116 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.154770 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" event={"ID":"50e153c5-173b-4a28-a028-5ebf2ff22054","Type":"ContainerStarted","Data":"70bae7772df94f781bd784c764254b997e934776431facfe3a48c521647259a7"} Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.156985 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz" event={"ID":"1d782003-59ce-4968-a30c-5f12f3a9bd4f","Type":"ContainerStarted","Data":"2f88336bbc285532c3b3a37422ae7533b9d4a6d82c1d9a8417acd201bb6e45b0"} Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.161329 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tkhnx" event={"ID":"ee60544b-9f19-48e5-84de-666fb4ca9110","Type":"ContainerStarted","Data":"4a77c64c9f4370ab834ceb4defcedd089c884f13419658db47cf4028dd82f804"} Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.174265 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.238712 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:02 crc kubenswrapper[4805]: E1216 11:57:02.239013 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:02.738960602 +0000 UTC m=+96.457218407 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.239587 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:02 crc kubenswrapper[4805]: E1216 11:57:02.240471 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:02.740457475 +0000 UTC m=+96.458715280 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.302540 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gz75j" podStartSLOduration=74.302512753 podStartE2EDuration="1m14.302512753s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:02.296664385 +0000 UTC m=+96.014922190" watchObservedRunningTime="2025-12-16 11:57:02.302512753 +0000 UTC m=+96.020770588" Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.344803 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:02 crc kubenswrapper[4805]: E1216 11:57:02.345076 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:02.845054479 +0000 UTC m=+96.563312294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.346186 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:02 crc kubenswrapper[4805]: E1216 11:57:02.348664 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:02.848641763 +0000 UTC m=+96.566899658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.386966 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sjff7"] Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.450509 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:02 crc kubenswrapper[4805]: E1216 11:57:02.450643 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:02.950619341 +0000 UTC m=+96.668877156 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.450773 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:02 crc kubenswrapper[4805]: E1216 11:57:02.451179 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:02.951169027 +0000 UTC m=+96.669426832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.624934 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:02 crc kubenswrapper[4805]: E1216 11:57:02.625581 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:03.125564364 +0000 UTC m=+96.843822179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.779307 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:02 crc kubenswrapper[4805]: E1216 11:57:02.779923 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:03.279900472 +0000 UTC m=+96.998158457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:02 crc kubenswrapper[4805]: I1216 11:57:02.887372 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:02 crc kubenswrapper[4805]: E1216 11:57:02.893101 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:03.393064893 +0000 UTC m=+97.111322708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:03 crc kubenswrapper[4805]: I1216 11:57:03.021867 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:03 crc kubenswrapper[4805]: E1216 11:57:03.022396 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:03.52237148 +0000 UTC m=+97.240629285 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:03 crc kubenswrapper[4805]: I1216 11:57:03.164335 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:03 crc kubenswrapper[4805]: E1216 11:57:03.168854 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:03.668816841 +0000 UTC m=+97.387074796 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:03 crc kubenswrapper[4805]: I1216 11:57:03.316198 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:03 crc kubenswrapper[4805]: E1216 11:57:03.316842 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:03.816820386 +0000 UTC m=+97.535078191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:03 crc kubenswrapper[4805]: I1216 11:57:03.797713 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" event={"ID":"5a0ad740-271d-452e-b8be-d5e190a46719","Type":"ContainerStarted","Data":"e670aa28fa86068ada8772ceff70ecb9d08d7cc19845a9890e53de8aafc745b0"} Dec 16 11:57:03 crc kubenswrapper[4805]: I1216 11:57:03.812446 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:03 crc kubenswrapper[4805]: I1216 11:57:03.824928 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dxhld" event={"ID":"69858fb6-30f7-4b9b-a240-c95b4aa2de5a","Type":"ContainerStarted","Data":"0e134b10a785328cf0e95480bf791cc79a74548d053590cf4e7d8ded24cc6a47"} Dec 16 11:57:03 crc kubenswrapper[4805]: E1216 11:57:03.831213 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:04.331100978 +0000 UTC m=+98.049358793 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:03 crc kubenswrapper[4805]: I1216 11:57:03.847556 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" event={"ID":"ce9b2f10-5fc2-4784-9c34-5fc3ed544115","Type":"ContainerStarted","Data":"c21cc060b0903e4df6b43783302e0463ad4710a4018e8d3ad33e5a2a04cf5277"} Dec 16 11:57:03 crc kubenswrapper[4805]: I1216 11:57:03.915681 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:03 crc kubenswrapper[4805]: E1216 11:57:03.917081 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:04.417064005 +0000 UTC m=+98.135321810 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:03 crc kubenswrapper[4805]: I1216 11:57:03.935447 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-s9j77" event={"ID":"184529ec-0250-4ed3-a1ef-4a5606202a85","Type":"ContainerStarted","Data":"728e9d4544342492fc4c0c031cf69fdf13f00d0bd2a0e58e4b38af871c74cb9f"} Dec 16 11:57:03 crc kubenswrapper[4805]: I1216 11:57:03.935515 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-s9j77" Dec 16 11:57:03 crc kubenswrapper[4805]: I1216 11:57:03.939883 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:57:03 crc kubenswrapper[4805]: I1216 11:57:03.939949 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:57:03 crc kubenswrapper[4805]: I1216 11:57:03.940739 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" 
event={"ID":"25bbef6b-4746-41e9-83ef-20e9c54a7451","Type":"ContainerStarted","Data":"c16e1f1d57f0128b31694f27a14baf22fa2385d9235738fb65d5e86917ada879"} Dec 16 11:57:03 crc kubenswrapper[4805]: I1216 11:57:03.951549 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9k8h7" event={"ID":"b1679f25-d297-4683-b2e6-3657feb872eb","Type":"ContainerStarted","Data":"bbe9976a59798b85e892233b80c25595ab0b44f101c6c208e86b4d41637b5968"} Dec 16 11:57:03 crc kubenswrapper[4805]: I1216 11:57:03.954697 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-lzhxz" podStartSLOduration=75.9546814 podStartE2EDuration="1m15.9546814s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:03.953852896 +0000 UTC m=+97.672110711" watchObservedRunningTime="2025-12-16 11:57:03.9546814 +0000 UTC m=+97.672939215" Dec 16 11:57:04 crc kubenswrapper[4805]: I1216 11:57:04.023626 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:04 crc kubenswrapper[4805]: I1216 11:57:04.024996 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-s9j77" podStartSLOduration=76.024962575 podStartE2EDuration="1m16.024962575s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:04.015289046 +0000 UTC m=+97.733546861" watchObservedRunningTime="2025-12-16 11:57:04.024962575 +0000 UTC m=+97.743220400" Dec 16 11:57:04 crc kubenswrapper[4805]: E1216 11:57:04.025470 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:04.525275964 +0000 UTC m=+98.243533769 (durationBeforeRetry 500ms). 
Dec 16 11:57:04 crc kubenswrapper[4805]: E1216 11:57:04.025470 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:04.525275964 +0000 UTC m=+98.243533769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:04 crc kubenswrapper[4805]: I1216 11:57:04.075959 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-dxhld" podStartSLOduration=76.075938684 podStartE2EDuration="1m16.075938684s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:04.067972005 +0000 UTC m=+97.786229820" watchObservedRunningTime="2025-12-16 11:57:04.075938684 +0000 UTC m=+97.794196489"
Dec 16 11:57:04 crc kubenswrapper[4805]: I1216 11:57:04.185754 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:04 crc kubenswrapper[4805]: E1216 11:57:04.186510 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:04.686314806 +0000 UTC m=+98.404572611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:04 crc kubenswrapper[4805]: I1216 11:57:04.289128 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:04 crc kubenswrapper[4805]: E1216 11:57:04.289691 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:04.789674974 +0000 UTC m=+98.507932779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:04 crc kubenswrapper[4805]: I1216 11:57:04.417025 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:04 crc kubenswrapper[4805]: E1216 11:57:04.422068 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:04.92204663 +0000 UTC m=+98.640304435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:04 crc kubenswrapper[4805]: I1216 11:57:04.583779 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:04 crc kubenswrapper[4805]: E1216 11:57:04.584251 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:05.084213013 +0000 UTC m=+98.802470818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:04 crc kubenswrapper[4805]: I1216 11:57:04.696064 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:04 crc kubenswrapper[4805]: E1216 11:57:04.696993 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:05.196960563 +0000 UTC m=+98.915218368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:04 crc kubenswrapper[4805]: I1216 11:57:04.813850 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gsh2j"]
Dec 16 11:57:04 crc kubenswrapper[4805]: I1216 11:57:04.837119 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:04 crc kubenswrapper[4805]: E1216 11:57:04.837518 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:05.337485353 +0000 UTC m=+99.055743158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
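[Editor's note] Every one of the repeating UnmountVolume.TearDown and MountVolume.MountDevice failures above dies at the same step: before making any CSI call, the kubelet resolves the volume's driver name against its in-memory list of CSI drivers that have completed plugin registration, and kubevirt.io.hostpath-provisioner is not yet in that list at this point in the boot. A rough sketch of that lookup guard in Go, with hypothetical names (driverRegistry and client are stand-ins, not the kubelet's actual csi package internals):

    package main

    import (
        "fmt"
        "sync"
    )

    // driverRegistry stands in for the kubelet's in-memory set of CSI
    // drivers that have registered over the plugin-registration socket.
    type driverRegistry struct {
        mu      sync.RWMutex
        drivers map[string]struct{}
    }

    // client fails exactly the way the log shows: the name lookup, not
    // the mount or unmount itself, is what errors out.
    func (r *driverRegistry) client(name string) error {
        r.mu.RLock()
        defer r.mu.RUnlock()
        if _, ok := r.drivers[name]; !ok {
            return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return nil // real code would dial the driver's socket here
    }

    func main() {
        reg := &driverRegistry{drivers: map[string]struct{}{}}
        fmt.Println(reg.client("kubevirt.io.hostpath-provisioner"))
    }

Once the provisioner registers itself with the kubelet, the same lookup succeeds and the pending image-registry mount can go through; until then every retry below fails identically.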
Dec 16 11:57:04 crc kubenswrapper[4805]: I1216 11:57:04.929949 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z69nw"]
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:04.938427 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:05 crc kubenswrapper[4805]: E1216 11:57:04.938908 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:05.438891745 +0000 UTC m=+99.157149550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:04.988783 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9"]
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.010614 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" event={"ID":"aaf86ef4-c14b-41b6-bd97-92a701b82562","Type":"ContainerStarted","Data":"b7ef589a389fcc9c536f622154b10464a764de3a5f032ac68a0637a7ca42ead4"}
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.021716 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r" event={"ID":"a176db58-5c9e-4446-bccf-01eabe6612d2","Type":"ContainerStarted","Data":"fe2eefe19daaacce1fd5c74ca1355526cd519556952786644663cf2bdd8325de"}
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.047165 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:05 crc kubenswrapper[4805]: E1216 11:57:05.048689 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:05.548661789 +0000 UTC m=+99.266919604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.060748 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" event={"ID":"2377e2e1-2de4-46e9-a0a9-768f5dd52317","Type":"ContainerStarted","Data":"8f71faa81940dad83a3b3776c270549d59222b77bed79e7bacf1db218effb907"}
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.061536 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth"
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.075878 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2c94r" podStartSLOduration=77.075857323 podStartE2EDuration="1m17.075857323s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:05.074967057 +0000 UTC m=+98.793224862" watchObservedRunningTime="2025-12-16 11:57:05.075857323 +0000 UTC m=+98.794115138"
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.101238 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" event={"ID":"9a656193-7884-4e3d-8a17-4ff680c4a116","Type":"ContainerStarted","Data":"30bc178842fb6d356dbd098610eb80be5dc4c088c1eae233be70524a4329c68e"}
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.144639 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zbcnq" event={"ID":"e7893fbf-4354-4fc6-bc2a-fdadbdee1311","Type":"ContainerStarted","Data":"c823205aeda05b758ed54dca9b05804851ce714cad440f8159439ab876d4f162"}
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.145728 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c" event={"ID":"be38a4fa-fb1e-4f8d-9887-b4232ed7c087","Type":"ContainerStarted","Data":"b12b3847fc7732d7c934ebd04c80ca6f9c16eed9086fa2a6d0adbfa178a15210"}
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.170742 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd"]
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.173241 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:05 crc kubenswrapper[4805]: E1216 11:57:05.175818 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:05.675771521 +0000 UTC m=+99.394029326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.305285 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:05 crc kubenswrapper[4805]: E1216 11:57:05.305934 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:05.805919152 +0000 UTC m=+99.524176947 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.315156 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" event={"ID":"7f262dc5-9bae-450c-ab81-3172ba82e700","Type":"ContainerStarted","Data":"cf13e8e0dbbe2add8b9107202c98b5621c21fdf149192fbe51c81572eb1bff69"}
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.316189 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq"
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.317165 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" podStartSLOduration=77.317122585 podStartE2EDuration="1m17.317122585s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:05.314546781 +0000 UTC m=+99.032804596" watchObservedRunningTime="2025-12-16 11:57:05.317122585 +0000 UTC m=+99.035380410"
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.321671 4805 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nq4vq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.321744 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" podUID="7f262dc5-9bae-450c-ab81-3172ba82e700" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.340212 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-j4rx5" event={"ID":"1d60eaf3-b804-48a1-bc67-710f8ce5805f","Type":"ContainerStarted","Data":"febf0cd2ead58d97a1f25c51b3279e5991e7a191ed3f483b5dee6f568ecb650c"}
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.340389 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-q7lmj" podStartSLOduration=77.340376525 podStartE2EDuration="1m17.340376525s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:05.339279484 +0000 UTC m=+99.057537289" watchObservedRunningTime="2025-12-16 11:57:05.340376525 +0000 UTC m=+99.058634330"
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.356840 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.356928 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.443817 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:05 crc kubenswrapper[4805]: E1216 11:57:05.450683 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:05.950663264 +0000 UTC m=+99.668921079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.511532 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-j4rx5"
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.512630 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qpz4c" podStartSLOduration=77.512612299 podStartE2EDuration="1m17.512612299s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:05.50917629 +0000 UTC m=+99.227434115" watchObservedRunningTime="2025-12-16 11:57:05.512612299 +0000 UTC m=+99.230870114"
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.545396 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:05 crc kubenswrapper[4805]: E1216 11:57:05.546388 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:06.046368272 +0000 UTC m=+99.764626077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.552207 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2lp7s"]
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.585955 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn"]
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.595331 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.595639 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.596043 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc"]
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.607967 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-j4rx5" podStartSLOduration=77.607927526 podStartE2EDuration="1m17.607927526s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:05.604867758 +0000 UTC m=+99.323125563" watchObservedRunningTime="2025-12-16 11:57:05.607927526 +0000 UTC m=+99.326185351"
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.713760 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:05 crc kubenswrapper[4805]: E1216 11:57:05.715062 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:06.215040213 +0000 UTC m=+99.933298018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
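[Editor's note] The "SyncLoop (PLEG): event for pod" entries come from the kubelet's pod lifecycle event generator relaying container-runtime state changes (here ContainerStarted with a container or sandbox ID), while the "SyncLoop UPDATE" source="api" entries are pod updates pushed from the API server; both feed the same sync loop. A compact sketch of the event shape as these records print it (field names mirror the log output, not the kubelet's actual pleg types):

    package main

    import "fmt"

    // podLifecycleEvent mirrors the fields printed in the "SyncLoop (PLEG)"
    // records: a pod UID, an event type, and the runtime ID it refers to.
    type podLifecycleEvent struct {
        ID   string // pod UID
        Type string // e.g. "ContainerStarted"
        Data string // container or sandbox ID
    }

    func handle(ev podLifecycleEvent) {
        switch ev.Type {
        case "ContainerStarted":
            // the kubelet re-syncs the pod; probes and startup-latency
            // tracking follow from here
            fmt.Printf("sync pod %s after container %s started\n", ev.ID, ev.Data)
        default:
            fmt.Printf("unhandled event %q for pod %s\n", ev.Type, ev.ID)
        }
    }

    func main() {
        // Values taken from the router-default record above.
        handle(podLifecycleEvent{
            ID:   "1d60eaf3-b804-48a1-bc67-710f8ce5805f",
            Type: "ContainerStarted",
            Data: "febf0cd2ead58d97a1f25c51b3279e5991e7a191ed3f483b5dee6f568ecb650c",
        })
    }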
Dec 16 11:57:05 crc kubenswrapper[4805]: I1216 11:57:05.761976 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6ndp4"]
Dec 16 11:57:06 crc kubenswrapper[4805]: I1216 11:57:06.062076 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:06 crc kubenswrapper[4805]: E1216 11:57:06.062528 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:06.562508807 +0000 UTC m=+100.280766612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:06 crc kubenswrapper[4805]: I1216 11:57:06.164060 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:06 crc kubenswrapper[4805]: E1216 11:57:06.164675 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:06.664656212 +0000 UTC m=+100.382914017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:06 crc kubenswrapper[4805]: I1216 11:57:06.183287 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" podStartSLOduration=78.183261628 podStartE2EDuration="1m18.183261628s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:05.807480298 +0000 UTC m=+99.525738103" watchObservedRunningTime="2025-12-16 11:57:06.183261628 +0000 UTC m=+99.901519443"
Dec 16 11:57:06 crc kubenswrapper[4805]: I1216 11:57:06.188244 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m"]
Dec 16 11:57:06 crc kubenswrapper[4805]: I1216 11:57:06.384764 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:06 crc kubenswrapper[4805]: E1216 11:57:06.385306 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:06.88528625 +0000 UTC m=+100.603544055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.096741 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.134276 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.097815 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:07 crc kubenswrapper[4805]: E1216 11:57:07.098136 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:08.098090181 +0000 UTC m=+101.816348016 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.135711 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs\") pod \"network-metrics-daemon-ct6d8\" (UID: \"2cc0dc93-5341-4b1e-840d-2c0c951ff142\") " pod="openshift-multus/network-metrics-daemon-ct6d8"
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.135915 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
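[Editor's note] Note the backoff in the records above: each failure schedules the next attempt no sooner than durationBeforeRetry, which has been 500ms since 11:57:04 and reaches 1s for the unmount operation at 11:57:07.098136. This is the kubelet's per-operation exponential backoff; the exact growth policy and cap live in its exponentialbackoff package, but the schedule looks roughly like this toy version (initial delay taken from the log, doubling and cap assumed for illustration):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond // initial durationBeforeRetry seen in the log
        const maxDelay = 2 * time.Minute // illustrative cap; the real backoff is capped too
        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("attempt %d failed: durationBeforeRetry %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }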
Dec 16 11:57:07 crc kubenswrapper[4805]: E1216 11:57:07.136431 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:07.636416296 +0000 UTC m=+101.354674101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.170927 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cc0dc93-5341-4b1e-840d-2c0c951ff142-metrics-certs\") pod \"network-metrics-daemon-ct6d8\" (UID: \"2cc0dc93-5341-4b1e-840d-2c0c951ff142\") " pod="openshift-multus/network-metrics-daemon-ct6d8"
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.237153 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:07 crc kubenswrapper[4805]: E1216 11:57:07.237285 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:07.737257895 +0000 UTC m=+101.455515700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.237509 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:07 crc kubenswrapper[4805]: E1216 11:57:07.237923 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:07.737910954 +0000 UTC m=+101.456168759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.249595 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z69nw" event={"ID":"7306c7ab-58c7-4034-a406-ec666f9a70b9","Type":"ContainerStarted","Data":"cb0f66a3f2dafed09abd4f9c7394de12d97a988edf4c257e1d0ec541ad78125a"}
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.269060 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2lp7s" event={"ID":"d7f81a73-07df-4bbe-ac9b-33897678a7ec","Type":"ContainerStarted","Data":"42782631baa8b189551ed59801e43af1a8c6570448d906d4b17b3c490444a204"}
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.277961 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gsh2j" event={"ID":"54c6a431-7172-4e4c-b283-24c980b51826","Type":"ContainerStarted","Data":"2af85372865273613949764ec1a5cba50f802a6e3dd49ee61f354b1870686f5c"}
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.312291 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd" event={"ID":"24e933f0-876c-4bf3-87dd-a2c6f7c69a6e","Type":"ContainerStarted","Data":"1be28a816660b294b8105a726b44cffc50e05119fd87ec57f39f80275d8953a1"}
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.341392 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ct6d8"
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.455328 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m" event={"ID":"4e56b71e-6bab-4c7c-9d42-c20607af7311","Type":"ContainerStarted","Data":"b04020bac08590dcb8d17ac5c2f290581d6f2171807f88c83cf180356803882b"}
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.456311 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:07 crc kubenswrapper[4805]: E1216 11:57:07.457719 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:07.957694124 +0000 UTC m=+101.675951939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:07 crc kubenswrapper[4805]: E1216 11:57:07.460004 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:07.95998445 +0000 UTC m=+101.678242255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.458239 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.489931 4805 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nq4vq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.489990 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" podUID="7f262dc5-9bae-450c-ab81-3172ba82e700" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.527167 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.527218 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.546317 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" event={"ID":"796d11e4-b3f6-46b1-ac93-5f7738a3cd28","Type":"ContainerStarted","Data":"87a78023f2bd2ba63cf24bb4e2be29904d9b78c28c104a42edd973ed8ec20462"}
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.576625 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" event={"ID":"424e23bf-3841-48b3-80c7-b1d73d8110c7","Type":"ContainerStarted","Data":"54db3c4ff65eaf0ba224538558943545886c339da9230f25863cbed8247b8cff"}
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.592025 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:07 crc kubenswrapper[4805]: E1216 11:57:07.592537 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:08.092517874 +0000 UTC m=+101.810775679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.612559 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tkhnx" event={"ID":"ee60544b-9f19-48e5-84de-666fb4ca9110","Type":"ContainerStarted","Data":"173bfff68bec3e7f8cb2da65802bb88a7190df8fe6fc5ad0b0987666ecc7a89b"}
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.613247 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-dxhld"
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.613447 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-dxhld"
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.672238 4805 patch_prober.go:28] interesting pod/console-f9d7485db-dxhld container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.672304 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dxhld" podUID="69858fb6-30f7-4b9b-a240-c95b4aa2de5a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.733093 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:07 crc kubenswrapper[4805]: E1216 11:57:07.778949 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:08.278911491 +0000 UTC m=+101.997169296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.794251 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" event={"ID":"98881921-2262-4a91-b9af-6d1d5207963f","Type":"ContainerStarted","Data":"48aea8521fe293eaeee7c659147688bcc6d03da7fc9cda9b88075c95957e952b"}
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.830282 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" event={"ID":"01fa8b28-cdb8-4c39-917e-af29958ec710","Type":"ContainerStarted","Data":"d1dc40c36e8fc7bd7c45382e08797fa901eeaa9878a9f5e8d920c031c955f398"}
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.832984 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" event={"ID":"a700da85-9bcf-4223-aaed-e9baa489beab","Type":"ContainerStarted","Data":"90cf7162abb1ed57d50a0497e568ade32d85f2306b86c4e2ba0f1dd8fa10e022"}
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.834297 4805 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nq4vq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.834372 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" podUID="7f262dc5-9bae-450c-ab81-3172ba82e700" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.842702 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:07 crc kubenswrapper[4805]: E1216 11:57:07.843239 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:08.343219626 +0000 UTC m=+102.061477431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.923243 4805 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-prkth container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.923307 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" podUID="2377e2e1-2de4-46e9-a0a9-768f5dd52317" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.944655 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:07 crc kubenswrapper[4805]: E1216 11:57:07.952569 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:08.45254828 +0000 UTC m=+102.170806085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:07 crc kubenswrapper[4805]: I1216 11:57:07.989899 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh"
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.066806 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:08 crc kubenswrapper[4805]: E1216 11:57:08.066961 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:08.566935816 +0000 UTC m=+102.285193621 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.067656 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:08 crc kubenswrapper[4805]: E1216 11:57:08.071840 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:08.571817587 +0000 UTC m=+102.290075602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.074831 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" podStartSLOduration=80.074796693 podStartE2EDuration="1m20.074796693s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:08.066469613 +0000 UTC m=+101.784727428" watchObservedRunningTime="2025-12-16 11:57:08.074796693 +0000 UTC m=+101.793054518"
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.177879 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:08 crc kubenswrapper[4805]: E1216 11:57:08.179477 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:08.679460849 +0000 UTC m=+102.397718654 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.180886 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.180924 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.180986 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.181003 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.282479 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:08 crc kubenswrapper[4805]: E1216 11:57:08.283008 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:08.782994793 +0000 UTC m=+102.501252598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.297592 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-tkhnx" podStartSLOduration=13.297574173 podStartE2EDuration="13.297574173s" podCreationTimestamp="2025-12-16 11:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:08.29710189 +0000 UTC m=+102.015359695" watchObservedRunningTime="2025-12-16 11:57:08.297574173 +0000 UTC m=+102.015831988"
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.298107 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smtmw" podStartSLOduration=80.298099749 podStartE2EDuration="1m20.298099749s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:08.252505385 +0000 UTC m=+101.970763200" watchObservedRunningTime="2025-12-16 11:57:08.298099749 +0000 UTC m=+102.016357564"
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.388386 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:08 crc kubenswrapper[4805]: E1216 11:57:08.389358 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:08.889338258 +0000 UTC m=+102.607596063 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.497434 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:08 crc kubenswrapper[4805]: E1216 11:57:08.497978 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:08.997958129 +0000 UTC m=+102.716215934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.596909 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 16 11:57:08 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 16 11:57:08 crc kubenswrapper[4805]: [+]process-running ok
Dec 16 11:57:08 crc kubenswrapper[4805]: healthz check failed
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.597678 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.599815 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:08 crc kubenswrapper[4805]: E1216 11:57:08.600471 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:09.100451893 +0000 UTC m=+102.818709698 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.735298 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:08 crc kubenswrapper[4805]: E1216 11:57:08.737388 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:09.237372228 +0000 UTC m=+102.955630033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:08 crc kubenswrapper[4805]: I1216 11:57:08.738900 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.005215 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:09 crc kubenswrapper[4805]: E1216 11:57:09.006039 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:09.50601075 +0000 UTC m=+103.224268555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.138978 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:09 crc kubenswrapper[4805]: E1216 11:57:09.139460 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:09.639448716 +0000 UTC m=+103.357706521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.267322 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:09 crc kubenswrapper[4805]: E1216 11:57:09.268303 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:09.768287017 +0000 UTC m=+103.486544822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.334604 4805 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-prkth container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.334932 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" podUID="2377e2e1-2de4-46e9-a0a9-768f5dd52317" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.334723 4805 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-prkth container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.334995 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" podUID="2377e2e1-2de4-46e9-a0a9-768f5dd52317" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.381054 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:09 crc kubenswrapper[4805]: E1216 11:57:09.381474 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:09.881461511 +0000 UTC m=+103.599719316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.405449 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-28rcg" event={"ID":"2d03a289-7b6f-4264-83cc-fe33b26d5a2c","Type":"ContainerStarted","Data":"8a793e8c1a98fde5327f49dd658a57545476295baaecfc33dfd0503178de172a"} Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.406251 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.468984 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-28rcg" podStartSLOduration=81.468963253 podStartE2EDuration="1m21.468963253s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:09.427720254 +0000 UTC m=+103.145978069" watchObservedRunningTime="2025-12-16 11:57:09.468963253 +0000 UTC m=+103.187221078" Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.485689 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:09 crc kubenswrapper[4805]: E1216 11:57:09.486130 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:09.986112817 +0000 UTC m=+103.704370622 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.604074 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:09 crc kubenswrapper[4805]: E1216 11:57:09.607025 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-16 11:57:10.107008071 +0000 UTC m=+103.825266076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.675663 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:09 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:09 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:09 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.675733 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.748780 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:09 crc kubenswrapper[4805]: E1216 11:57:09.749108 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:10.249090696 +0000 UTC m=+103.967348501 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.754312 4805 patch_prober.go:28] interesting pod/console-operator-58897d9998-28rcg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.754374 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-28rcg" podUID="2d03a289-7b6f-4264-83cc-fe33b26d5a2c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.756792 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8" event={"ID":"bc4281fb-7921-4bde-89f0-059e9b9935e2","Type":"ContainerStarted","Data":"7ef61b1865885c29614521b33d1390c47b1c82af3144ac6fc113c9c99738603a"} Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.847185 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx" event={"ID":"06647187-e41a-4641-9fab-7604122b8f0a","Type":"ContainerStarted","Data":"03771c57a505f588579b3a6273ddb4499923d5a96aefc1c49bc6cd93c3baf92b"} Dec 16 11:57:09 crc kubenswrapper[4805]: I1216 11:57:09.851114 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:09 crc kubenswrapper[4805]: E1216 11:57:09.851595 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:10.3515792 +0000 UTC m=+104.069837005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.011621 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" event={"ID":"832ba633-f44c-4aa1-8791-65656ed2a744","Type":"ContainerStarted","Data":"a55a68464c3d51078055096d330283b75e84d74dfda78385ce64b8a1eb75c76c"} Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.012209 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lfcx" podStartSLOduration=82.012189449 podStartE2EDuration="1m22.012189449s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:10.010324935 +0000 UTC m=+103.728582750" watchObservedRunningTime="2025-12-16 11:57:10.012189449 +0000 UTC m=+103.730447264" Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.013019 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.013653 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:10 crc kubenswrapper[4805]: E1216 11:57:10.014894 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:10.514876826 +0000 UTC m=+104.233134631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.023715 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6ndp4" event={"ID":"9e7c545a-021d-4b0a-a781-fd4f68258fa3","Type":"ContainerStarted","Data":"2f9a1340b89e532384b1c3fe664a6b7a1d31548e4c367fc139dfcaa83370708c"} Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.133817 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:10 crc kubenswrapper[4805]: E1216 11:57:10.134245 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:10.634204745 +0000 UTC m=+104.352462550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.143185 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" event={"ID":"ce9b2f10-5fc2-4784-9c34-5fc3ed544115","Type":"ContainerStarted","Data":"2800e45a02b68fc9c957a021438bfc2a843fa13420d2c3618f2e56720135844b"} Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.150418 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9k8h7" event={"ID":"b1679f25-d297-4683-b2e6-3657feb872eb","Type":"ContainerStarted","Data":"4f52d041a9ffd4b07c59f6fbafe9d86673807c4660701359b93f2f2c7eecbf27"} Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.206517 4805 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mwpbn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.206928 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" podUID="832ba633-f44c-4aa1-8791-65656ed2a744" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.321165 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.336662 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" podStartSLOduration=82.336632009 podStartE2EDuration="1m22.336632009s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:10.138963683 +0000 UTC m=+103.857221488" watchObservedRunningTime="2025-12-16 11:57:10.336632009 +0000 UTC m=+104.054889824" Dec 16 11:57:10 crc kubenswrapper[4805]: E1216 11:57:10.355587 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:10.855560505 +0000 UTC m=+104.573818310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.390459 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-r2dsr" event={"ID":"ae46fc1b-666a-498f-92f4-673cb535530e","Type":"ContainerStarted","Data":"428e7c4280ed96d3752cb2a8640b37876323a3b8cfb5c8010acfc7cace018b41"} Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.468991 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" podStartSLOduration=82.468968703 podStartE2EDuration="1m22.468968703s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:10.468282884 +0000 UTC m=+104.186540699" watchObservedRunningTime="2025-12-16 11:57:10.468968703 +0000 UTC m=+104.187226518" Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.471191 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.472121 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" podStartSLOduration=82.472102134 podStartE2EDuration="1m22.472102134s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-16 11:57:10.33838631 +0000 UTC m=+104.056644115" watchObservedRunningTime="2025-12-16 11:57:10.472102134 +0000 UTC m=+104.190359959" Dec 16 11:57:10 crc kubenswrapper[4805]: E1216 11:57:10.476247 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:10.976226503 +0000 UTC m=+104.694484498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.503104 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jccm" event={"ID":"df2558d5-6ce0-4fb0-b689-fc8682a89744","Type":"ContainerStarted","Data":"59d8b785d6f482499fe35b51d5a27a81e0ec3ec4bed816d2effd15541075fbb5"} Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.505970 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-9k8h7" podStartSLOduration=82.505950909 podStartE2EDuration="1m22.505950909s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:10.504406975 +0000 UTC m=+104.222664780" watchObservedRunningTime="2025-12-16 11:57:10.505950909 +0000 UTC m=+104.224208734" Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.517016 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:10 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:10 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:10 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.517070 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.520450 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz" event={"ID":"1d782003-59ce-4968-a30c-5f12f3a9bd4f","Type":"ContainerStarted","Data":"c1de3cab92de652041b039656ad9676fc50b68fc54b7ee440e075b02fe56a763"} Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.577468 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.578101 4805 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-prkth container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure 
output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.578136 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" podUID="2377e2e1-2de4-46e9-a0a9-768f5dd52317" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.592341 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:10 crc kubenswrapper[4805]: E1216 11:57:10.595517 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:11.09548619 +0000 UTC m=+104.813744005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.676345 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sjff7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.676428 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.698068 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:10 crc kubenswrapper[4805]: E1216 11:57:10.698747 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:11.198722515 +0000 UTC m=+104.916980320 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.739766 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md" podStartSLOduration=82.739718846 podStartE2EDuration="1m22.739718846s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:10.591851765 +0000 UTC m=+104.310109580" watchObservedRunningTime="2025-12-16 11:57:10.739718846 +0000 UTC m=+104.457976661" Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.758796 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jccm" podStartSLOduration=82.758765295 podStartE2EDuration="1m22.758765295s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:10.728112402 +0000 UTC m=+104.446370207" watchObservedRunningTime="2025-12-16 11:57:10.758765295 +0000 UTC m=+104.477023110" Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.808053 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9d7fz" podStartSLOduration=82.808027515 podStartE2EDuration="1m22.808027515s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:10.80646074 +0000 UTC m=+104.524718555" watchObservedRunningTime="2025-12-16 11:57:10.808027515 +0000 UTC m=+104.526285330" Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.812311 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:10 crc kubenswrapper[4805]: E1216 11:57:10.812695 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:11.312674839 +0000 UTC m=+105.030932644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:10 crc kubenswrapper[4805]: I1216 11:57:10.926121 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:10 crc kubenswrapper[4805]: E1216 11:57:10.926684 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:11.426665224 +0000 UTC m=+105.144923029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.031206 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:11 crc kubenswrapper[4805]: E1216 11:57:11.031456 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:11.531398633 +0000 UTC m=+105.249656438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.032134 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:11 crc kubenswrapper[4805]: E1216 11:57:11.032610 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:11.532593607 +0000 UTC m=+105.250851412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.135708 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:11 crc kubenswrapper[4805]: E1216 11:57:11.136107 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:11.63608755 +0000 UTC m=+105.354345355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.281011 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:11 crc kubenswrapper[4805]: E1216 11:57:11.281694 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:11.781676406 +0000 UTC m=+105.499934201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.288596 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" podStartSLOduration=83.288574215 podStartE2EDuration="1m23.288574215s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:10.93450672 +0000 UTC m=+104.652764525" watchObservedRunningTime="2025-12-16 11:57:11.288574215 +0000 UTC m=+105.006832030" Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.290027 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ct6d8"] Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.407886 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:11 crc kubenswrapper[4805]: E1216 11:57:11.409925 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:11.90984018 +0000 UTC m=+105.628097985 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.522824 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:11 crc kubenswrapper[4805]: E1216 11:57:11.523230 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:12.023215157 +0000 UTC m=+105.741472962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.544483 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:11 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:11 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:11 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.544539 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.625538 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:11 crc kubenswrapper[4805]: E1216 11:57:11.626348 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:12.126328799 +0000 UTC m=+105.844586614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:11 crc kubenswrapper[4805]: W1216 11:57:11.626447 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cc0dc93_5341_4b1e_840d_2c0c951ff142.slice/crio-794a1f95a8fbeba1bc67aa5bd6eaca9932992e8faa0d56cadfe007b977b99683 WatchSource:0}: Error finding container 794a1f95a8fbeba1bc67aa5bd6eaca9932992e8faa0d56cadfe007b977b99683: Status 404 returned error can't find the container with id 794a1f95a8fbeba1bc67aa5bd6eaca9932992e8faa0d56cadfe007b977b99683
Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.649387 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" event={"ID":"eeb94cb2-1f1c-4547-93ab-895959638a88","Type":"ContainerStarted","Data":"123ade3345609ec0e8eb7f467eb38a587d736ba9ad1055be50a0d89f4a91f38d"}
Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.667502 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kkf49" event={"ID":"55e84bc6-30ab-46a1-8535-e9fa83eaaa5b","Type":"ContainerStarted","Data":"baf8b5627992a9243adc7d86de31834496c158b32eeba1ec5f55b233229c20a2"}
Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.690701 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" event={"ID":"a700da85-9bcf-4223-aaed-e9baa489beab","Type":"ContainerStarted","Data":"79f0267a56f71174f2d28a619c119dd48061dbeb2ccc4bdadbee45273d1cf9e4"}
Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.759259 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd" event={"ID":"24e933f0-876c-4bf3-87dd-a2c6f7c69a6e","Type":"ContainerStarted","Data":"1ef5126984aedec74624ec5be842f5a288dde2faef04816042e7edd081b996ff"}
Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.761112 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:11 crc kubenswrapper[4805]: E1216 11:57:11.764225 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:12.264205492 +0000 UTC m=+105.982463357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.783129 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" event={"ID":"aaf86ef4-c14b-41b6-bd97-92a701b82562","Type":"ContainerStarted","Data":"4fbb242df0e42ec0c71560723f73a43e82821f7db63e82ed5ba9b0737a7536bd"}
Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.800096 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zbcnq" event={"ID":"e7893fbf-4354-4fc6-bc2a-fdadbdee1311","Type":"ContainerStarted","Data":"932b14a5f39efad8ad1a46ec2cef38ea0f3bca9cec58e1267230e7fcb1b4ba41"}
Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.822272 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" event={"ID":"dbccfd36-acd0-4302-befd-f032ebddb856","Type":"ContainerStarted","Data":"26d2d644443684b6ab63395e7be166049c636189f1cf5595d8df64d96f41ddff"}
Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.837448 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" event={"ID":"5826956e-b8ea-4053-8987-ebe9f350c975","Type":"ContainerStarted","Data":"f5e446d644bbff88e3f603ea098a86645091156f98a8716d5137fff07aea4e17"}
Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.865774 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:11 crc kubenswrapper[4805]: E1216 11:57:11.866455 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:12.366424918 +0000 UTC m=+106.084682723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.903402 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qm6md" event={"ID":"7efbfb18-2b26-4c71-9ef4-e9b7966fe2ba","Type":"ContainerStarted","Data":"06d24fa14593741910212f8434977b47687c9f80db9cd8b9c7fc9d2b96cc9c23"}
Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.931649 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fnx4n" podStartSLOduration=83.931631057 podStartE2EDuration="1m23.931631057s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:11.856514153 +0000 UTC m=+105.574771958" watchObservedRunningTime="2025-12-16 11:57:11.931631057 +0000 UTC m=+105.649888872"
Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.932659 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rbcrd" podStartSLOduration=83.932651797 podStartE2EDuration="1m23.932651797s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:11.931253577 +0000 UTC m=+105.649511392" watchObservedRunningTime="2025-12-16 11:57:11.932651797 +0000 UTC m=+105.650909612"
Dec 16 11:57:11 crc kubenswrapper[4805]: I1216 11:57:11.968173 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:11 crc kubenswrapper[4805]: E1216 11:57:11.970452 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:12.470438126 +0000 UTC m=+106.188695931 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.021305 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z69nw" event={"ID":"7306c7ab-58c7-4034-a406-ec666f9a70b9","Type":"ContainerStarted","Data":"f0c03d1256d10194414dcd0bc4268250d507ee3f148fee716a825960d7caf242"}
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.034976 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k5chc" podStartSLOduration=84.034961286 podStartE2EDuration="1m24.034961286s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:12.033466423 +0000 UTC m=+105.751724238" watchObservedRunningTime="2025-12-16 11:57:12.034961286 +0000 UTC m=+105.753219101"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.081608 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8" event={"ID":"bc4281fb-7921-4bde-89f0-059e9b9935e2","Type":"ContainerStarted","Data":"60e55b2f2354318f9e14f21bb0a42eae90bab1b6e93f2834466d73469c9879b8"}
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.082054 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:12 crc kubenswrapper[4805]: E1216 11:57:12.082510 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:12.582435134 +0000 UTC m=+106.300692939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.084901 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" event={"ID":"796d11e4-b3f6-46b1-ac93-5f7738a3cd28","Type":"ContainerStarted","Data":"556cefd95b84badc47d5d710842929d79801751c336e90153a5c98abce6d4b33"}
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.085469 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.086601 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2lp7s" event={"ID":"d7f81a73-07df-4bbe-ac9b-33897678a7ec","Type":"ContainerStarted","Data":"a665035f17e9c3c7b2f46b804c7c15748ead177bee958d35776940456046367a"}
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.095090 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zbcnq" podStartSLOduration=84.095075068 podStartE2EDuration="1m24.095075068s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:12.094348467 +0000 UTC m=+105.812606272" watchObservedRunningTime="2025-12-16 11:57:12.095075068 +0000 UTC m=+105.813332883"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.112446 4805 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-k5tfc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.112493 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" podUID="796d11e4-b3f6-46b1-ac93-5f7738a3cd28" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.151229 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" event={"ID":"25bbef6b-4746-41e9-83ef-20e9c54a7451","Type":"ContainerStarted","Data":"b41712d780d8adb4a1843e3f5a213406691aab1a9e575d76126bb28555213ae5"}
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.185875 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sjff7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.185937 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.187283 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:12 crc kubenswrapper[4805]: E1216 11:57:12.193621 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:12.693601668 +0000 UTC m=+106.411859473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.207154 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-2lp7s" podStartSLOduration=17.207120188 podStartE2EDuration="17.207120188s" podCreationTimestamp="2025-12-16 11:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:12.185658379 +0000 UTC m=+105.903916184" watchObservedRunningTime="2025-12-16 11:57:12.207120188 +0000 UTC m=+105.925378003"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.209685 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" event={"ID":"01fa8b28-cdb8-4c39-917e-af29958ec710","Type":"ContainerStarted","Data":"9c5391b47afa545e66989e5b6d34cbfa194c6a2f9988927b01cb45203796016b"}
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.210833 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.222575 4805 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-stgk9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body=
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.222631 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" podUID="01fa8b28-cdb8-4c39-917e-af29958ec710" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.244252 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.244448 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.288328 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gsh2j" event={"ID":"54c6a431-7172-4e4c-b283-24c980b51826","Type":"ContainerStarted","Data":"e0f0177f695a20f62e5c7d746ab793aa790762d8803306b86b5373783ea17c43"}
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.294906 4805 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-hgj4j container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.294983 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" podUID="dbccfd36-acd0-4302-befd-f032ebddb856" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.300395 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:12 crc kubenswrapper[4805]: E1216 11:57:12.301703 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:12.801670012 +0000 UTC m=+106.519927817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.302000 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:12 crc kubenswrapper[4805]: E1216 11:57:12.308435 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:12.808412786 +0000 UTC m=+106.526670591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.339570 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" podStartSLOduration=84.339547963 podStartE2EDuration="1m24.339547963s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:12.33700446 +0000 UTC m=+106.055262265" watchObservedRunningTime="2025-12-16 11:57:12.339547963 +0000 UTC m=+106.057805778"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.502785 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x58q8" podStartSLOduration=84.502760187 podStartE2EDuration="1m24.502760187s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:12.263812921 +0000 UTC m=+105.982070736" watchObservedRunningTime="2025-12-16 11:57:12.502760187 +0000 UTC m=+106.221018002"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.507176 4805 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mwpbn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body=
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.507254 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" podUID="832ba633-f44c-4aa1-8791-65656ed2a744" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.511702 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.517650 4805 patch_prober.go:28] interesting pod/console-operator-58897d9998-28rcg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.517740 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-28rcg" podUID="2d03a289-7b6f-4264-83cc-fe33b26d5a2c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.519801 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" event={"ID":"98881921-2262-4a91-b9af-6d1d5207963f","Type":"ContainerStarted","Data":"67dcc49803a4312555b7b5d10b91f9bdc573cf0246e9e078e9c57102c580ebaf"}
Dec 16 11:57:12 crc kubenswrapper[4805]: E1216 11:57:12.520496 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:13.020449477 +0000 UTC m=+106.738707282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.521835 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.543275 4805 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tzztn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body=
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.543587 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" podUID="98881921-2262-4a91-b9af-6d1d5207963f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.544606 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" podStartSLOduration=84.544590983 podStartE2EDuration="1m24.544590983s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:12.542953515 +0000 UTC m=+106.261211320" watchObservedRunningTime="2025-12-16 11:57:12.544590983 +0000 UTC m=+106.262848798"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.545089 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 16 11:57:12 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 16 11:57:12 crc kubenswrapper[4805]: [+]process-running ok
Dec 16 11:57:12 crc kubenswrapper[4805]: healthz check failed
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.545110 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.615099 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:12 crc kubenswrapper[4805]: E1216 11:57:12.615694 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:13.115679111 +0000 UTC m=+106.833936926 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.648194 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gsh2j" podStartSLOduration=84.648161858 podStartE2EDuration="1m24.648161858s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:12.644835562 +0000 UTC m=+106.363093367" watchObservedRunningTime="2025-12-16 11:57:12.648161858 +0000 UTC m=+106.366419673"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.716178 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:12 crc kubenswrapper[4805]: E1216 11:57:12.731942 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:13.231896731 +0000 UTC m=+106.950154536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.810809 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" podStartSLOduration=84.810789044 podStartE2EDuration="1m24.810789044s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:12.809969231 +0000 UTC m=+106.528227046" watchObservedRunningTime="2025-12-16 11:57:12.810789044 +0000 UTC m=+106.529046869"
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.836921 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:12 crc kubenswrapper[4805]: E1216 11:57:12.837498 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:13.337486614 +0000 UTC m=+107.055744419 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.938477 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:12 crc kubenswrapper[4805]: E1216 11:57:12.938667 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:13.438636379 +0000 UTC m=+107.156894184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:12 crc kubenswrapper[4805]: I1216 11:57:12.939161 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:12 crc kubenswrapper[4805]: E1216 11:57:12.939854 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:13.439829774 +0000 UTC m=+107.158087589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.270865 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:13 crc kubenswrapper[4805]: E1216 11:57:13.272740 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:13.772691899 +0000 UTC m=+107.490949704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.330609 4805 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-prkth container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.331260 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" podUID="2377e2e1-2de4-46e9-a0a9-768f5dd52317" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.331591 4805 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-prkth container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded" start-of-body=
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.331693 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" podUID="2377e2e1-2de4-46e9-a0a9-768f5dd52317" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded"
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.372854 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:13 crc kubenswrapper[4805]: E1216 11:57:13.373286 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:13.873270221 +0000 UTC m=+107.591528026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.521590 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:13 crc kubenswrapper[4805]: E1216 11:57:13.522694 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:14.022586478 +0000 UTC m=+107.740844433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.533168 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 16 11:57:13 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 16 11:57:13 crc kubenswrapper[4805]: [+]process-running ok
Dec 16 11:57:13 crc kubenswrapper[4805]: healthz check failed
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.533271 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.623081 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:13 crc kubenswrapper[4805]: E1216 11:57:13.623554 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:14.123542761 +0000 UTC m=+107.841800566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.654700 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kkf49" event={"ID":"55e84bc6-30ab-46a1-8535-e9fa83eaaa5b","Type":"ContainerStarted","Data":"b0b96d474fadad955d12128765ada4a05192d9651c237592c9248fc07dfa40f7"}
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.659824 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" event={"ID":"5826956e-b8ea-4053-8987-ebe9f350c975","Type":"ContainerStarted","Data":"dcf08eee8e24cce09f30478ab62b55b1d1f5876cb7a3d4d35aa0a6f421fbef35"}
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.662570 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m" event={"ID":"4e56b71e-6bab-4c7c-9d42-c20607af7311","Type":"ContainerStarted","Data":"4a488b7404c3ce9c8fe0f59765522e285d61ad73ae9b07d8e2fde521fffb71db"}
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.668980 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z69nw" event={"ID":"7306c7ab-58c7-4034-a406-ec666f9a70b9","Type":"ContainerStarted","Data":"5eb36d75cc7da15bc9ccb55c93908cbb4328e7247a62d1f6ed8138d4f1d3d846"}
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.669844 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z69nw"
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.673358 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6ndp4" event={"ID":"9e7c545a-021d-4b0a-a781-fd4f68258fa3","Type":"ContainerStarted","Data":"558480932d4e7e466c5e1b744fe55cfd963327814229b7e55b96816609f56719"}
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.722715 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" event={"ID":"2cc0dc93-5341-4b1e-840d-2c0c951ff142","Type":"ContainerStarted","Data":"d6e3b961ef5b388c0d542798cdc3db0ea24bf508e4237dc6815ca9b3e8873191"}
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.722758 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" event={"ID":"2cc0dc93-5341-4b1e-840d-2c0c951ff142","Type":"ContainerStarted","Data":"794a1f95a8fbeba1bc67aa5bd6eaca9932992e8faa0d56cadfe007b977b99683"}
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.728608 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:13 crc kubenswrapper[4805]: E1216 11:57:13.730514 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:14.230495436 +0000 UTC m=+107.948753241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.750347 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gsh2j" event={"ID":"54c6a431-7172-4e4c-b283-24c980b51826","Type":"ContainerStarted","Data":"1bac9b793b12cb0368bf1c5d17af730161fee62c4d58ab848054a5b89c0465c7"}
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.750828 4805 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tzztn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body=
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.750857 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" podUID="98881921-2262-4a91-b9af-6d1d5207963f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused"
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.751635 4805 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-stgk9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body=
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.751659 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" podUID="01fa8b28-cdb8-4c39-917e-af29958ec710" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused"
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.751942 4805 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-k5tfc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.751983 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" podUID="796d11e4-b3f6-46b1-ac93-5f7738a3cd28" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.752214 4805 patch_prober.go:28] interesting pod/console-operator-58897d9998-28rcg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.752232 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-28rcg" podUID="2d03a289-7b6f-4264-83cc-fe33b26d5a2c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.753413 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sjff7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.753438 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused"
Dec 16 11:57:13 crc kubenswrapper[4805]: I1216 11:57:13.832207 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:13 crc kubenswrapper[4805]: E1216 11:57:13.849973 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:14.349947232 +0000 UTC m=+108.068205037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:13.984234 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:14 crc kubenswrapper[4805]: E1216 11:57:13.984872 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:14.484854232 +0000 UTC m=+108.203112027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.002096 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kkf49" podStartSLOduration=86.002078778 podStartE2EDuration="1m26.002078778s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:13.787745938 +0000 UTC m=+107.506003743" watchObservedRunningTime="2025-12-16 11:57:14.002078778 +0000 UTC m=+107.720336593"
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.089445 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:14 crc kubenswrapper[4805]: E1216 11:57:14.090392 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:14.590366423 +0000 UTC m=+108.308624228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.134943 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-z69nw" podStartSLOduration=19.134905086 podStartE2EDuration="19.134905086s" podCreationTimestamp="2025-12-16 11:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:14.13467674 +0000 UTC m=+107.852934545" watchObservedRunningTime="2025-12-16 11:57:14.134905086 +0000 UTC m=+107.853162901"
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.138618 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" podStartSLOduration=86.138599853 podStartE2EDuration="1m26.138599853s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:14.004152118 +0000 UTC m=+107.722409913" watchObservedRunningTime="2025-12-16 11:57:14.138599853 +0000 UTC m=+107.856857668"
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.191293 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:14 crc kubenswrapper[4805]: E1216 11:57:14.191507 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:14.691459826 +0000 UTC m=+108.409717641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.192033 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:14 crc kubenswrapper[4805]: E1216 11:57:14.192513 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:14.692500917 +0000 UTC m=+108.410758852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.293422 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:14 crc kubenswrapper[4805]: E1216 11:57:14.293629 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:14.79359999 +0000 UTC m=+108.511857795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.294266 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:14 crc kubenswrapper[4805]: E1216 11:57:14.294811 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:14.794789504 +0000 UTC m=+108.513047309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.431118 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:14 crc kubenswrapper[4805]: E1216 11:57:14.431574 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:14.931554636 +0000 UTC m=+108.649812441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.534719 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.535235 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 16 11:57:14 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld
Dec 16 11:57:14 crc kubenswrapper[4805]: [+]process-running ok
Dec 16 11:57:14 crc kubenswrapper[4805]: healthz check failed
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.536264 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 16 11:57:14 crc kubenswrapper[4805]: E1216 11:57:14.535534 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:15.035520873 +0000 UTC m=+108.753778678 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.638373 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:14 crc kubenswrapper[4805]: E1216 11:57:14.638822 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:15.138797699 +0000 UTC m=+108.857055514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.740016 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7"
Dec 16 11:57:14 crc kubenswrapper[4805]: E1216 11:57:14.740555 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:15.240541101 +0000 UTC m=+108.958798906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.942653 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 16 11:57:14 crc kubenswrapper[4805]: E1216 11:57:14.943332 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:15.443314065 +0000 UTC m=+109.161571870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.943466 4805 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mwpbn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.949995 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" podUID="832ba633-f44c-4aa1-8791-65656ed2a744" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.982282 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m" event={"ID":"4e56b71e-6bab-4c7c-9d42-c20607af7311","Type":"ContainerStarted","Data":"e2666320a244fffb6f969cfb3b59bdf783a6d5e1d493c44e4d85bb03550995d7"}
Dec 16 11:57:14 crc kubenswrapper[4805]: I1216 11:57:14.982380 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m"
Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.054240 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") "
pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:15 crc kubenswrapper[4805]: E1216 11:57:15.055008 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:15.554990464 +0000 UTC m=+109.273248279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.059470 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6ndp4" event={"ID":"9e7c545a-021d-4b0a-a781-fd4f68258fa3","Type":"ContainerStarted","Data":"8039096158c9e5965b204b7123840b5bc9afb93f30b768388c841e9ab3f7b64c"} Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.094707 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ct6d8" event={"ID":"2cc0dc93-5341-4b1e-840d-2c0c951ff142","Type":"ContainerStarted","Data":"1ec4c8c28ebb464ea07386b0b163a16be15900aebb31fee90dc492caf088351b"} Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.096459 4805 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tzztn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.096499 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" podUID="98881921-2262-4a91-b9af-6d1d5207963f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.096890 4805 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-stgk9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body= Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.096918 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" podUID="01fa8b28-cdb8-4c39-917e-af29958ec710" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.110273 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m" podStartSLOduration=87.110246367 podStartE2EDuration="1m27.110246367s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:15.053320316 
+0000 UTC m=+108.771578141" watchObservedRunningTime="2025-12-16 11:57:15.110246367 +0000 UTC m=+108.828504182" Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.111500 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-6ndp4" podStartSLOduration=87.111490542 podStartE2EDuration="1m27.111490542s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:15.101869455 +0000 UTC m=+108.820127280" watchObservedRunningTime="2025-12-16 11:57:15.111490542 +0000 UTC m=+108.829748357" Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.213098 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:15 crc kubenswrapper[4805]: E1216 11:57:15.215331 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:15.715302314 +0000 UTC m=+109.433560129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.314870 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:15 crc kubenswrapper[4805]: E1216 11:57:15.315297 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:15.815285156 +0000 UTC m=+109.533542961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.416006 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:15 crc kubenswrapper[4805]: E1216 11:57:15.416355 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:15.916340248 +0000 UTC m=+109.634598053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.517379 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:15 crc kubenswrapper[4805]: E1216 11:57:15.517931 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:16.017909826 +0000 UTC m=+109.736167631 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.518098 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:15 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:15 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:15 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.518183 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.631838 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:15 crc kubenswrapper[4805]: E1216 11:57:15.634990 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:16.134961128 +0000 UTC m=+109.853218953 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:15 crc kubenswrapper[4805]: I1216 11:57:15.735894 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:15 crc kubenswrapper[4805]: E1216 11:57:15.736789 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:16.236759353 +0000 UTC m=+109.955017158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:16 crc kubenswrapper[4805]: I1216 11:57:16.098456 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:16 crc kubenswrapper[4805]: E1216 11:57:16.099508 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:16.599482656 +0000 UTC m=+110.317740461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:16 crc kubenswrapper[4805]: I1216 11:57:16.231200 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:16 crc kubenswrapper[4805]: E1216 11:57:16.231751 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:16.731730507 +0000 UTC m=+110.449988312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:16 crc kubenswrapper[4805]: I1216 11:57:16.247470 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" event={"ID":"eeb94cb2-1f1c-4547-93ab-895959638a88","Type":"ContainerStarted","Data":"38aaf720f20f26e47b6664dc4077279e64cfa55844199e9594b057871bb7834c"} Dec 16 11:57:16 crc kubenswrapper[4805]: I1216 11:57:16.810203 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:16 crc kubenswrapper[4805]: E1216 11:57:16.811430 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:17.811411885 +0000 UTC m=+111.529669690 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:16 crc kubenswrapper[4805]: I1216 11:57:16.814207 4805 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-prkth container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 16 11:57:16 crc kubenswrapper[4805]: I1216 11:57:16.814283 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" podUID="2377e2e1-2de4-46e9-a0a9-768f5dd52317" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 16 11:57:16 crc kubenswrapper[4805]: I1216 11:57:16.814539 4805 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-prkth container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 16 11:57:16 crc kubenswrapper[4805]: I1216 11:57:16.814562 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" podUID="2377e2e1-2de4-46e9-a0a9-768f5dd52317" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 11:57:16 crc kubenswrapper[4805]: I1216 11:57:16.828224 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:16 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:16 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:16 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:16 crc kubenswrapper[4805]: I1216 11:57:16.828348 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:16.976103 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:17 crc kubenswrapper[4805]: E1216 11:57:16.979233 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:17.47920731 +0000 UTC m=+111.197465125 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:16.987229 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:16.988037 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"8f71faa81940dad83a3b3776c270549d59222b77bed79e7bacf1db218effb907"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:16.988345 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" podUID="2377e2e1-2de4-46e9-a0a9-768f5dd52317" containerName="openshift-config-operator" containerID="cri-o://8f71faa81940dad83a3b3776c270549d59222b77bed79e7bacf1db218effb907" gracePeriod=30 Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:17.082108 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:17 crc kubenswrapper[4805]: E1216 11:57:17.082889 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:17.582873058 +0000 UTC m=+111.301130863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:17.128591 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:17.129019 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:17.184857 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:17 crc kubenswrapper[4805]: E1216 11:57:17.185827 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:17.685802044 +0000 UTC m=+111.404059849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:17.197058 4805 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4jnvf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:17.197359 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" podUID="5826956e-b8ea-4053-8987-ebe9f350c975" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:17.925835 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:17 crc kubenswrapper[4805]: E1216 11:57:17.927237 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-16 11:57:18.927215117 +0000 UTC m=+112.645472922 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:17.942831 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:17 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:17 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:17 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:17.942908 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:17.964872 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:17.966665 4805 patch_prober.go:28] interesting pod/console-f9d7485db-dxhld container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:17.966810 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dxhld" podUID="69858fb6-30f7-4b9b-a240-c95b4aa2de5a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:17.989397 4805 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-prkth container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 16 11:57:17 crc kubenswrapper[4805]: I1216 11:57:17.989784 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" podUID="2377e2e1-2de4-46e9-a0a9-768f5dd52317" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.029578 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:18 crc kubenswrapper[4805]: E1216 11:57:18.031042 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:18.53102813 +0000 UTC m=+112.249285935 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.130572 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:18 crc kubenswrapper[4805]: E1216 11:57:18.132310 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:18.632275702 +0000 UTC m=+112.350533517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:18 crc kubenswrapper[4805]: E1216 11:57:18.135224 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:18.635211447 +0000 UTC m=+112.353469252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.134357 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.137318 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.137385 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.137474 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.137499 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.140030 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-ct6d8" podStartSLOduration=90.139992015 podStartE2EDuration="1m30.139992015s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:15.233346564 +0000 UTC m=+108.951604369" watchObservedRunningTime="2025-12-16 11:57:18.139992015 +0000 UTC m=+111.858249830" Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.203505 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.241900 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:18 crc kubenswrapper[4805]: E1216 11:57:18.244664 4805 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:18.744639291 +0000 UTC m=+112.462897126 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.247387 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgj4j" Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.345497 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:18 crc kubenswrapper[4805]: E1216 11:57:18.345794 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:18.845782556 +0000 UTC m=+112.564040361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.446410 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:18 crc kubenswrapper[4805]: E1216 11:57:18.446741 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:18.946727535 +0000 UTC m=+112.664985340 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.527545 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:18 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:18 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:18 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.527848 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.548799 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:18 crc kubenswrapper[4805]: E1216 11:57:18.549405 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:19.049382394 +0000 UTC m=+112.767640199 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.660597 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:18 crc kubenswrapper[4805]: E1216 11:57:18.660704 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:19.160686882 +0000 UTC m=+112.878944697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.661017 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:18 crc kubenswrapper[4805]: E1216 11:57:18.661299 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:19.161292269 +0000 UTC m=+112.879550074 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.761359 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:18 crc kubenswrapper[4805]: E1216 11:57:18.761771 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:19.261738954 +0000 UTC m=+112.979996759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.862536 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:18 crc kubenswrapper[4805]: E1216 11:57:18.862944 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:19.3629276 +0000 UTC m=+113.081185405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.897638 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sjff7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.897874 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.897712 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sjff7 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.898400 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.960414 4805 patch_prober.go:28] interesting pod/console-operator-58897d9998-28rcg container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.963607 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-28rcg" podUID="2d03a289-7b6f-4264-83cc-fe33b26d5a2c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.963485 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.962913 4805 patch_prober.go:28] interesting pod/console-operator-58897d9998-28rcg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.964044 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-28rcg" podUID="2d03a289-7b6f-4264-83cc-fe33b26d5a2c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 11:57:18 crc kubenswrapper[4805]: E1216 11:57:18.963544 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:19.463527479 +0000 UTC m=+113.181785284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:18 crc kubenswrapper[4805]: I1216 11:57:18.964470 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:18 crc kubenswrapper[4805]: E1216 11:57:18.964856 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:19.464846287 +0000 UTC m=+113.183104092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.055826 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" event={"ID":"eeb94cb2-1f1c-4547-93ab-895959638a88","Type":"ContainerStarted","Data":"0302ee640029a9183162614ac62ffd368025b7451df00f25b3b366b17cac13d8"} Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.074328 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:19 crc kubenswrapper[4805]: E1216 11:57:19.074770 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:19.574751975 +0000 UTC m=+113.293009780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.201853 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:19 crc kubenswrapper[4805]: E1216 11:57:19.202281 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:19.70226458 +0000 UTC m=+113.420522405 (durationBeforeRetry 500ms). 
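[Editor's note] Every UnmountVolume.TearDown / MountVolume.MountDevice failure in this stretch has one root cause: the kubevirt.io.hostpath-provisioner CSI driver has not yet re-registered with the restarted kubelet, so the volume manager requeues each operation with durationBeforeRetry 500ms — hence the identical error repeating every half second until registration completes (at 11:57:20.103 below). A sketch of waiting for the driver to appear on the node's CSINode object, assuming recent client-go and a local kubeconfig (poll interval and timeout are arbitrary):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll the CSINode object until the driver shows up in its registered
	// driver list — the same list the kubelet consults when it logs
	// "not found in the list of registered CSI drivers".
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			csiNode, err := cs.StorageV1().CSINodes().Get(ctx, "crc", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			for _, d := range csiNode.Spec.Drivers {
				if d.Name == "kubevirt.io.hostpath-provisioner" {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("driver registered")
}
```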
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.303202 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:19 crc kubenswrapper[4805]: E1216 11:57:19.303894 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:19.803874809 +0000 UTC m=+113.522132614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.405021 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:19 crc kubenswrapper[4805]: E1216 11:57:19.405644 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:19.905631071 +0000 UTC m=+113.623888876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.506816 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:19 crc kubenswrapper[4805]: E1216 11:57:19.507503 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:20.007486546 +0000 UTC m=+113.725744351 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.618523 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:19 crc kubenswrapper[4805]: E1216 11:57:19.619293 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:20.119275377 +0000 UTC m=+113.837533192 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.649458 4805 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mwpbn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.649526 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" podUID="832ba633-f44c-4aa1-8791-65656ed2a744" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.649601 4805 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-prkth container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded" start-of-body= Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.649623 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" podUID="2377e2e1-2de4-46e9-a0a9-768f5dd52317" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded" Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.668064 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:19 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:19 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:19 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.668148 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.686995 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5tfc" Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.720589 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-stgk9" Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.720599 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:19 crc kubenswrapper[4805]: E1216 11:57:19.722049 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:20.222031029 +0000 UTC m=+113.940288904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.895980 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:19 crc kubenswrapper[4805]: E1216 11:57:19.896684 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-16 11:57:20.396666892 +0000 UTC m=+114.114924697 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sb7p7" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:19 crc kubenswrapper[4805]: I1216 11:57:19.919479 4805 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.031745 4805 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tzztn container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.031827 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" podUID="98881921-2262-4a91-b9af-6d1d5207963f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.044925 4805 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tzztn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.37:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.045218 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" podUID="98881921-2262-4a91-b9af-6d1d5207963f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.046065 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.047321 4805 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-16T11:57:19.920124739Z","Handler":null,"Name":""} Dec 16 11:57:20 crc kubenswrapper[4805]: E1216 11:57:20.048532 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-16 11:57:20.548515599 +0000 UTC m=+114.266773404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.082390 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" event={"ID":"eeb94cb2-1f1c-4547-93ab-895959638a88","Type":"ContainerStarted","Data":"0edec6969bf9fd8bf8c71058deb038b4c284d1b0cf387a94d293c8408bdfb6ed"} Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.093788 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-z69nw" Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.103359 4805 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.103399 4805 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.147705 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:20 crc 
kubenswrapper[4805]: I1216 11:57:20.346046 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" podUID="eeb94cb2-1f1c-4547-93ab-895959638a88" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.42:9898/healthz\": dial tcp 10.217.0.42:9898: connect: connection refused" Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.357918 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.357954 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.516759 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:20 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:20 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:20 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.517000 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.655619 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.656906 4805 util.go:30] "No sandbox for pod can be found. 
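[Editor's note] The "attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice" line above is the normal path for simple drivers: NodeStageVolume/NodeUnstageVolume are optional CSI RPCs advertised via NodeGetCapabilities, and hostpath-style drivers typically omit them, so the kubelet records MountDevice as an immediate success and goes straight to NodePublishVolume (the MountVolume.SetUp that succeeds at 11:57:20.814 below). A sketch of the capability handler using the CSI spec's Go bindings — driver scaffolding is omitted and only the handler shape is shown:

```go
package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeServer is a fragment of a hypothetical CSI node plugin; only the
// capability handler is sketched here.
type nodeServer struct{}

// Returning an empty capability list (no STAGE_UNSTAGE_VOLUME) is what makes
// the kubelet skip MountDevice, as seen in the log above. A driver that does
// stage volumes would include the RPC capability instead.
func (ns *nodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{
		Capabilities: []*csi.NodeServiceCapability{
			// Uncomment to advertise staging support:
			// {
			// 	Type: &csi.NodeServiceCapability_Rpc{
			// 		Rpc: &csi.NodeServiceCapability_RPC{
			// 			Type: csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
			// 		},
			// 	},
			// },
		},
	}, nil
}

func main() {
	resp, _ := (&nodeServer{}).NodeGetCapabilities(context.Background(), &csi.NodeGetCapabilitiesRequest{})
	fmt.Println(len(resp.Capabilities), "node capabilities advertised")
}
```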
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 16 11:57:20 crc kubenswrapper[4805]: W1216 11:57:20.701822 4805 reflector.go:561] object-"openshift-kube-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-apiserver": no relationship found between node 'crc' and this object Dec 16 11:57:20 crc kubenswrapper[4805]: E1216 11:57:20.701866 4805 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 16 11:57:20 crc kubenswrapper[4805]: W1216 11:57:20.701965 4805 reflector.go:561] object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n": failed to list *v1.Secret: secrets "installer-sa-dockercfg-5pr6n" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-apiserver": no relationship found between node 'crc' and this object Dec 16 11:57:20 crc kubenswrapper[4805]: E1216 11:57:20.701981 4805 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-5pr6n\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"installer-sa-dockercfg-5pr6n\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.730820 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.758653 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/157f6828-6a78-4597-b17c-3cbbafead745-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"157f6828-6a78-4597-b17c-3cbbafead745\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.758731 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/157f6828-6a78-4597-b17c-3cbbafead745-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"157f6828-6a78-4597-b17c-3cbbafead745\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.814878 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sb7p7\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.911093 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/157f6828-6a78-4597-b17c-3cbbafead745-kube-api-access\") pod 
\"revision-pruner-8-crc\" (UID: \"157f6828-6a78-4597-b17c-3cbbafead745\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.911194 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/157f6828-6a78-4597-b17c-3cbbafead745-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"157f6828-6a78-4597-b17c-3cbbafead745\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 16 11:57:20 crc kubenswrapper[4805]: I1216 11:57:20.911306 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/157f6828-6a78-4597-b17c-3cbbafead745-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"157f6828-6a78-4597-b17c-3cbbafead745\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.059531 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.087188 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.096685 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.100462 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.101417 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-prkth_2377e2e1-2de4-46e9-a0a9-768f5dd52317/openshift-config-operator/0.log" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.106604 4805 generic.go:334] "Generic (PLEG): container finished" podID="2377e2e1-2de4-46e9-a0a9-768f5dd52317" containerID="8f71faa81940dad83a3b3776c270549d59222b77bed79e7bacf1db218effb907" exitCode=255 Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.107605 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" event={"ID":"2377e2e1-2de4-46e9-a0a9-768f5dd52317","Type":"ContainerDied","Data":"8f71faa81940dad83a3b3776c270549d59222b77bed79e7bacf1db218effb907"} Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.107637 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" event={"ID":"2377e2e1-2de4-46e9-a0a9-768f5dd52317","Type":"ContainerStarted","Data":"b3d8f901f82bd6e208bb15c21f8d4b2559eb1d26612fdd94d78d9010c41772fb"} Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.108155 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.170162 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.170531 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.175443 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0de69a3-3d67-4100-8d75-414bbbd2c52d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b0de69a3-3d67-4100-8d75-414bbbd2c52d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.175519 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0de69a3-3d67-4100-8d75-414bbbd2c52d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b0de69a3-3d67-4100-8d75-414bbbd2c52d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.229918 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.277057 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0de69a3-3d67-4100-8d75-414bbbd2c52d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b0de69a3-3d67-4100-8d75-414bbbd2c52d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.277123 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0de69a3-3d67-4100-8d75-414bbbd2c52d-kubelet-dir\") 
pod \"revision-pruner-9-crc\" (UID: \"b0de69a3-3d67-4100-8d75-414bbbd2c52d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.277256 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0de69a3-3d67-4100-8d75-414bbbd2c52d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b0de69a3-3d67-4100-8d75-414bbbd2c52d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.282472 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-x9qqr" podStartSLOduration=26.282455532 podStartE2EDuration="26.282455532s" podCreationTimestamp="2025-12-16 11:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:21.194578579 +0000 UTC m=+114.912836394" watchObservedRunningTime="2025-12-16 11:57:21.282455532 +0000 UTC m=+115.000713347" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.283245 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q9ppd"] Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.284451 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.310458 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mcqz5"] Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.311752 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mcqz5" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.351705 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 16 11:57:21 crc kubenswrapper[4805]: W1216 11:57:21.354914 4805 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Dec 16 11:57:21 crc kubenswrapper[4805]: E1216 11:57:21.354961 4805 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.359739 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.377883 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c8a922f-a887-401f-9f22-18355a0a81d7-catalog-content\") pod \"community-operators-mcqz5\" (UID: \"6c8a922f-a887-401f-9f22-18355a0a81d7\") " pod="openshift-marketplace/community-operators-mcqz5" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.378011 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffz4k\" (UniqueName: \"kubernetes.io/projected/7aba55d3-6790-4f26-9663-63bdf0c9991e-kube-api-access-ffz4k\") pod \"certified-operators-q9ppd\" (UID: \"7aba55d3-6790-4f26-9663-63bdf0c9991e\") " pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.378113 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c8a922f-a887-401f-9f22-18355a0a81d7-utilities\") pod \"community-operators-mcqz5\" (UID: \"6c8a922f-a887-401f-9f22-18355a0a81d7\") " pod="openshift-marketplace/community-operators-mcqz5" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.378213 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aba55d3-6790-4f26-9663-63bdf0c9991e-catalog-content\") pod \"certified-operators-q9ppd\" (UID: \"7aba55d3-6790-4f26-9663-63bdf0c9991e\") " pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.378297 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz9t2\" (UniqueName: \"kubernetes.io/projected/6c8a922f-a887-401f-9f22-18355a0a81d7-kube-api-access-lz9t2\") pod \"community-operators-mcqz5\" (UID: \"6c8a922f-a887-401f-9f22-18355a0a81d7\") " pod="openshift-marketplace/community-operators-mcqz5" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.378401 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aba55d3-6790-4f26-9663-63bdf0c9991e-utilities\") pod \"certified-operators-q9ppd\" (UID: 
\"7aba55d3-6790-4f26-9663-63bdf0c9991e\") " pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.455812 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jc8cv"] Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.456831 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.461217 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0de69a3-3d67-4100-8d75-414bbbd2c52d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b0de69a3-3d67-4100-8d75-414bbbd2c52d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.463731 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.483200 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxgff\" (UniqueName: \"kubernetes.io/projected/127838d6-328d-46cb-b942-6aed7bfd5048-kube-api-access-hxgff\") pod \"certified-operators-jc8cv\" (UID: \"127838d6-328d-46cb-b942-6aed7bfd5048\") " pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.483266 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127838d6-328d-46cb-b942-6aed7bfd5048-utilities\") pod \"certified-operators-jc8cv\" (UID: \"127838d6-328d-46cb-b942-6aed7bfd5048\") " pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.483297 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c8a922f-a887-401f-9f22-18355a0a81d7-catalog-content\") pod \"community-operators-mcqz5\" (UID: \"6c8a922f-a887-401f-9f22-18355a0a81d7\") " pod="openshift-marketplace/community-operators-mcqz5" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.483356 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffz4k\" (UniqueName: \"kubernetes.io/projected/7aba55d3-6790-4f26-9663-63bdf0c9991e-kube-api-access-ffz4k\") pod \"certified-operators-q9ppd\" (UID: \"7aba55d3-6790-4f26-9663-63bdf0c9991e\") " pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.483425 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c8a922f-a887-401f-9f22-18355a0a81d7-utilities\") pod \"community-operators-mcqz5\" (UID: \"6c8a922f-a887-401f-9f22-18355a0a81d7\") " pod="openshift-marketplace/community-operators-mcqz5" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.483444 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127838d6-328d-46cb-b942-6aed7bfd5048-catalog-content\") pod \"certified-operators-jc8cv\" (UID: \"127838d6-328d-46cb-b942-6aed7bfd5048\") " pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 
11:57:21.483463 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aba55d3-6790-4f26-9663-63bdf0c9991e-catalog-content\") pod \"certified-operators-q9ppd\" (UID: \"7aba55d3-6790-4f26-9663-63bdf0c9991e\") " pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.483481 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9t2\" (UniqueName: \"kubernetes.io/projected/6c8a922f-a887-401f-9f22-18355a0a81d7-kube-api-access-lz9t2\") pod \"community-operators-mcqz5\" (UID: \"6c8a922f-a887-401f-9f22-18355a0a81d7\") " pod="openshift-marketplace/community-operators-mcqz5" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.483498 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aba55d3-6790-4f26-9663-63bdf0c9991e-utilities\") pod \"certified-operators-q9ppd\" (UID: \"7aba55d3-6790-4f26-9663-63bdf0c9991e\") " pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.483954 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aba55d3-6790-4f26-9663-63bdf0c9991e-utilities\") pod \"certified-operators-q9ppd\" (UID: \"7aba55d3-6790-4f26-9663-63bdf0c9991e\") " pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.484295 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c8a922f-a887-401f-9f22-18355a0a81d7-catalog-content\") pod \"community-operators-mcqz5\" (UID: \"6c8a922f-a887-401f-9f22-18355a0a81d7\") " pod="openshift-marketplace/community-operators-mcqz5" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.484856 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c8a922f-a887-401f-9f22-18355a0a81d7-utilities\") pod \"community-operators-mcqz5\" (UID: \"6c8a922f-a887-401f-9f22-18355a0a81d7\") " pod="openshift-marketplace/community-operators-mcqz5" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.485168 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aba55d3-6790-4f26-9663-63bdf0c9991e-catalog-content\") pod \"certified-operators-q9ppd\" (UID: \"7aba55d3-6790-4f26-9663-63bdf0c9991e\") " pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.500827 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q9ppd"] Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.505243 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dc7vx"] Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.506088 4805 util.go:30] "No sandbox for pod can be found. 
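[Editor's note] Each marketplace catalog pod above mounts two emptyDir volumes ("utilities" and "catalog-content") plus a projected service-account token volume (kube-api-access-*). The emptyDirs show up as instant MountVolume.SetUp successes because they are just fresh directories created on the node; only the projected token volume needs the apiserver. The same shape, declared illustratively:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The two scratch volumes the catalog pods mount. An emptyDir is created
	// empty on the node when the pod starts, which is why SetUp succeeds
	// immediately in the log above.
	vols := []corev1.Volume{
		{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
	}
	for _, v := range vols {
		fmt.Println(v.Name)
	}
}
```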
Need to start a new one" pod="openshift-marketplace/community-operators-dc7vx" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.515241 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:21 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:21 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:21 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.515300 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.584952 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxgff\" (UniqueName: \"kubernetes.io/projected/127838d6-328d-46cb-b942-6aed7bfd5048-kube-api-access-hxgff\") pod \"certified-operators-jc8cv\" (UID: \"127838d6-328d-46cb-b942-6aed7bfd5048\") " pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.585009 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127838d6-328d-46cb-b942-6aed7bfd5048-utilities\") pod \"certified-operators-jc8cv\" (UID: \"127838d6-328d-46cb-b942-6aed7bfd5048\") " pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.585075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-catalog-content\") pod \"community-operators-dc7vx\" (UID: \"89930a1c-e5ae-4885-9dfd-f9c10df38c8a\") " pod="openshift-marketplace/community-operators-dc7vx" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.585255 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127838d6-328d-46cb-b942-6aed7bfd5048-catalog-content\") pod \"certified-operators-jc8cv\" (UID: \"127838d6-328d-46cb-b942-6aed7bfd5048\") " pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.585277 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-utilities\") pod \"community-operators-dc7vx\" (UID: \"89930a1c-e5ae-4885-9dfd-f9c10df38c8a\") " pod="openshift-marketplace/community-operators-dc7vx" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.585335 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbv75\" (UniqueName: \"kubernetes.io/projected/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-kube-api-access-gbv75\") pod \"community-operators-dc7vx\" (UID: \"89930a1c-e5ae-4885-9dfd-f9c10df38c8a\") " pod="openshift-marketplace/community-operators-dc7vx" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.586036 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/127838d6-328d-46cb-b942-6aed7bfd5048-utilities\") pod \"certified-operators-jc8cv\" (UID: \"127838d6-328d-46cb-b942-6aed7bfd5048\") " pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.586336 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127838d6-328d-46cb-b942-6aed7bfd5048-catalog-content\") pod \"certified-operators-jc8cv\" (UID: \"127838d6-328d-46cb-b942-6aed7bfd5048\") " pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.588224 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mcqz5"] Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.596202 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffz4k\" (UniqueName: \"kubernetes.io/projected/7aba55d3-6790-4f26-9663-63bdf0c9991e-kube-api-access-ffz4k\") pod \"certified-operators-q9ppd\" (UID: \"7aba55d3-6790-4f26-9663-63bdf0c9991e\") " pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.609831 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jc8cv"] Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.609898 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dc7vx"] Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.610461 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz9t2\" (UniqueName: \"kubernetes.io/projected/6c8a922f-a887-401f-9f22-18355a0a81d7-kube-api-access-lz9t2\") pod \"community-operators-mcqz5\" (UID: \"6c8a922f-a887-401f-9f22-18355a0a81d7\") " pod="openshift-marketplace/community-operators-mcqz5" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.664453 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mcqz5" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.688035 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-catalog-content\") pod \"community-operators-dc7vx\" (UID: \"89930a1c-e5ae-4885-9dfd-f9c10df38c8a\") " pod="openshift-marketplace/community-operators-dc7vx" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.688132 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-utilities\") pod \"community-operators-dc7vx\" (UID: \"89930a1c-e5ae-4885-9dfd-f9c10df38c8a\") " pod="openshift-marketplace/community-operators-dc7vx" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.688206 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbv75\" (UniqueName: \"kubernetes.io/projected/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-kube-api-access-gbv75\") pod \"community-operators-dc7vx\" (UID: \"89930a1c-e5ae-4885-9dfd-f9c10df38c8a\") " pod="openshift-marketplace/community-operators-dc7vx" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.689256 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-catalog-content\") pod \"community-operators-dc7vx\" (UID: \"89930a1c-e5ae-4885-9dfd-f9c10df38c8a\") " pod="openshift-marketplace/community-operators-dc7vx" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.689676 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-utilities\") pod \"community-operators-dc7vx\" (UID: \"89930a1c-e5ae-4885-9dfd-f9c10df38c8a\") " pod="openshift-marketplace/community-operators-dc7vx" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.720799 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxgff\" (UniqueName: \"kubernetes.io/projected/127838d6-328d-46cb-b942-6aed7bfd5048-kube-api-access-hxgff\") pod \"certified-operators-jc8cv\" (UID: \"127838d6-328d-46cb-b942-6aed7bfd5048\") " pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.755985 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbv75\" (UniqueName: \"kubernetes.io/projected/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-kube-api-access-gbv75\") pod \"community-operators-dc7vx\" (UID: \"89930a1c-e5ae-4885-9dfd-f9c10df38c8a\") " pod="openshift-marketplace/community-operators-dc7vx" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.762289 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.831375 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dc7vx" Dec 16 11:57:21 crc kubenswrapper[4805]: I1216 11:57:21.916321 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 16 11:57:22 crc kubenswrapper[4805]: I1216 11:57:22.025264 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/157f6828-6a78-4597-b17c-3cbbafead745-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"157f6828-6a78-4597-b17c-3cbbafead745\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 16 11:57:22 crc kubenswrapper[4805]: I1216 11:57:22.174551 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 16 11:57:22 crc kubenswrapper[4805]: I1216 11:57:22.512914 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:22 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:22 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:22 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:22 crc kubenswrapper[4805]: I1216 11:57:22.513216 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:22 crc kubenswrapper[4805]: I1216 11:57:22.537012 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 16 11:57:22 crc kubenswrapper[4805]: I1216 11:57:22.605401 4805 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/certified-operators-q9ppd" secret="" err="failed to sync secret cache: timed out waiting for the condition" Dec 16 11:57:22 crc kubenswrapper[4805]: I1216 11:57:22.605507 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 11:57:22 crc kubenswrapper[4805]: I1216 11:57:22.820597 4805 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/certified-operators-jc8cv" secret="" err="failed to sync secret cache: timed out waiting for the condition" Dec 16 11:57:22 crc kubenswrapper[4805]: I1216 11:57:22.820673 4805 util.go:30] "No sandbox for pod can be found. 
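[Editor's note] "Unable to retrieve pull secret ... failed to sync secret cache: timed out waiting for the condition" above is the same node-authorizer propagation delay seen earlier: the kubelet's secret cache for the namespace cannot sync until the new pods' bindings are known to the apiserver, after which "Caches populated for *v1.Secret ..." appears (11:57:23.040 below for certified-operators-dockercfg-4rs5g). The kubelet-side pattern is an informer plus WaitForCacheSync; a generic sketch under the same assumptions as the earlier snippets (local kubeconfig; the 30s deadline is arbitrary):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Namespace-scoped secret informer, analogous to the kubelet's
	// per-namespace secret cache that the log shows timing out, then syncing.
	factory := informers.NewSharedInformerFactoryWithOptions(cs, 0,
		informers.WithNamespace("openshift-marketplace"))
	secrets := factory.Core().V1().Secrets().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Closing this channel is the deadline; a denied list/watch then surfaces
	// as the "timed out waiting for the condition" seen in the log.
	deadline := make(chan struct{})
	go func() { time.Sleep(30 * time.Second); close(deadline) }()
	if !cache.WaitForCacheSync(deadline, secrets.HasSynced) {
		fmt.Println("timed out waiting for secret cache to sync")
		return
	}
	fmt.Println("secret cache synced")
}
```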
Need to start a new one" pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.040068 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.513315 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:23 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:23 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:23 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.513676 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.536344 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4fcgh"] Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.537440 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 11:57:23 crc kubenswrapper[4805]: W1216 11:57:23.580856 4805 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": failed to list *v1.Secret: secrets "redhat-operators-dockercfg-ct8rh" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Dec 16 11:57:23 crc kubenswrapper[4805]: E1216 11:57:23.580920 4805 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-ct8rh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"redhat-operators-dockercfg-ct8rh\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.627286 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-catalog-content\") pod \"redhat-operators-4fcgh\" (UID: \"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0\") " pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.627733 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-265tc\" (UniqueName: \"kubernetes.io/projected/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-kube-api-access-265tc\") pod \"redhat-operators-4fcgh\" (UID: \"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0\") " pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.627874 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-utilities\") pod \"redhat-operators-4fcgh\" (UID: \"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0\") " 
pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.774798 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-catalog-content\") pod \"redhat-operators-4fcgh\" (UID: \"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0\") " pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.774884 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-265tc\" (UniqueName: \"kubernetes.io/projected/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-kube-api-access-265tc\") pod \"redhat-operators-4fcgh\" (UID: \"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0\") " pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.774923 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-utilities\") pod \"redhat-operators-4fcgh\" (UID: \"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0\") " pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.775646 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-utilities\") pod \"redhat-operators-4fcgh\" (UID: \"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0\") " pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.775659 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-catalog-content\") pod \"redhat-operators-4fcgh\" (UID: \"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0\") " pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.780263 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8sb75"] Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.782077 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 11:57:23 crc kubenswrapper[4805]: I1216 11:57:23.792510 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fcgh"] Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.060349 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8sb75"] Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.092167 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4f3221-afad-4a1d-8471-8018c2f08ddc-catalog-content\") pod \"redhat-operators-8sb75\" (UID: \"cf4f3221-afad-4a1d-8471-8018c2f08ddc\") " pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.092489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4f3221-afad-4a1d-8471-8018c2f08ddc-utilities\") pod \"redhat-operators-8sb75\" (UID: \"cf4f3221-afad-4a1d-8471-8018c2f08ddc\") " pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.092692 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm5p8\" (UniqueName: \"kubernetes.io/projected/cf4f3221-afad-4a1d-8471-8018c2f08ddc-kube-api-access-lm5p8\") pod \"redhat-operators-8sb75\" (UID: \"cf4f3221-afad-4a1d-8471-8018c2f08ddc\") " pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.193821 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm5p8\" (UniqueName: \"kubernetes.io/projected/cf4f3221-afad-4a1d-8471-8018c2f08ddc-kube-api-access-lm5p8\") pod \"redhat-operators-8sb75\" (UID: \"cf4f3221-afad-4a1d-8471-8018c2f08ddc\") " pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.194199 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4f3221-afad-4a1d-8471-8018c2f08ddc-catalog-content\") pod \"redhat-operators-8sb75\" (UID: \"cf4f3221-afad-4a1d-8471-8018c2f08ddc\") " pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.194231 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4f3221-afad-4a1d-8471-8018c2f08ddc-utilities\") pod \"redhat-operators-8sb75\" (UID: \"cf4f3221-afad-4a1d-8471-8018c2f08ddc\") " pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.194819 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-265tc\" (UniqueName: \"kubernetes.io/projected/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-kube-api-access-265tc\") pod \"redhat-operators-4fcgh\" (UID: \"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0\") " pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.195249 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4f3221-afad-4a1d-8471-8018c2f08ddc-utilities\") pod \"redhat-operators-8sb75\" (UID: \"cf4f3221-afad-4a1d-8471-8018c2f08ddc\") " 
pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.195369 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4f3221-afad-4a1d-8471-8018c2f08ddc-catalog-content\") pod \"redhat-operators-8sb75\" (UID: \"cf4f3221-afad-4a1d-8471-8018c2f08ddc\") " pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.284257 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sb7p7"] Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.312644 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l6h6x"] Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.314188 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.323551 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.342693 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm5p8\" (UniqueName: \"kubernetes.io/projected/cf4f3221-afad-4a1d-8471-8018c2f08ddc-kube-api-access-lm5p8\") pod \"redhat-operators-8sb75\" (UID: \"cf4f3221-afad-4a1d-8471-8018c2f08ddc\") " pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.356753 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lhc2l"] Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.358693 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.393777 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6h6x"] Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.399639 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bw8g\" (UniqueName: \"kubernetes.io/projected/3ed6f24b-f15e-4540-8fd2-1800a086a69e-kube-api-access-5bw8g\") pod \"redhat-marketplace-lhc2l\" (UID: \"3ed6f24b-f15e-4540-8fd2-1800a086a69e\") " pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.399690 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jwnj\" (UniqueName: \"kubernetes.io/projected/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-kube-api-access-5jwnj\") pod \"redhat-marketplace-l6h6x\" (UID: \"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73\") " pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.399732 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-catalog-content\") pod \"redhat-marketplace-l6h6x\" (UID: \"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73\") " pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.399761 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ed6f24b-f15e-4540-8fd2-1800a086a69e-utilities\") pod \"redhat-marketplace-lhc2l\" (UID: \"3ed6f24b-f15e-4540-8fd2-1800a086a69e\") " pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.399779 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ed6f24b-f15e-4540-8fd2-1800a086a69e-catalog-content\") pod \"redhat-marketplace-lhc2l\" (UID: \"3ed6f24b-f15e-4540-8fd2-1800a086a69e\") " pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.399848 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-utilities\") pod \"redhat-marketplace-l6h6x\" (UID: \"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73\") " pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.501527 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhc2l"] Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.502972 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-catalog-content\") pod \"redhat-marketplace-l6h6x\" (UID: \"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73\") " pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.503023 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ed6f24b-f15e-4540-8fd2-1800a086a69e-utilities\") pod 
\"redhat-marketplace-lhc2l\" (UID: \"3ed6f24b-f15e-4540-8fd2-1800a086a69e\") " pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.503045 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ed6f24b-f15e-4540-8fd2-1800a086a69e-catalog-content\") pod \"redhat-marketplace-lhc2l\" (UID: \"3ed6f24b-f15e-4540-8fd2-1800a086a69e\") " pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.503103 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-utilities\") pod \"redhat-marketplace-l6h6x\" (UID: \"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73\") " pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.503157 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bw8g\" (UniqueName: \"kubernetes.io/projected/3ed6f24b-f15e-4540-8fd2-1800a086a69e-kube-api-access-5bw8g\") pod \"redhat-marketplace-lhc2l\" (UID: \"3ed6f24b-f15e-4540-8fd2-1800a086a69e\") " pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.503197 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jwnj\" (UniqueName: \"kubernetes.io/projected/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-kube-api-access-5jwnj\") pod \"redhat-marketplace-l6h6x\" (UID: \"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73\") " pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.503808 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ed6f24b-f15e-4540-8fd2-1800a086a69e-catalog-content\") pod \"redhat-marketplace-lhc2l\" (UID: \"3ed6f24b-f15e-4540-8fd2-1800a086a69e\") " pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.503837 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ed6f24b-f15e-4540-8fd2-1800a086a69e-utilities\") pod \"redhat-marketplace-lhc2l\" (UID: \"3ed6f24b-f15e-4540-8fd2-1800a086a69e\") " pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.503998 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-catalog-content\") pod \"redhat-marketplace-l6h6x\" (UID: \"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73\") " pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.504122 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-utilities\") pod \"redhat-marketplace-l6h6x\" (UID: \"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73\") " pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.660512 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Dec 16 11:57:24 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:24 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:24 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.660562 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.713318 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.733936 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bw8g\" (UniqueName: \"kubernetes.io/projected/3ed6f24b-f15e-4540-8fd2-1800a086a69e-kube-api-access-5bw8g\") pod \"redhat-marketplace-lhc2l\" (UID: \"3ed6f24b-f15e-4540-8fd2-1800a086a69e\") " pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.736417 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jwnj\" (UniqueName: \"kubernetes.io/projected/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-kube-api-access-5jwnj\") pod \"redhat-marketplace-l6h6x\" (UID: \"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73\") " pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.767624 4805 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4jnvf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 16 11:57:24 crc kubenswrapper[4805]: [+]log ok Dec 16 11:57:24 crc kubenswrapper[4805]: [+]etcd ok Dec 16 11:57:24 crc kubenswrapper[4805]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 16 11:57:24 crc kubenswrapper[4805]: [+]poststarthook/generic-apiserver-start-informers ok Dec 16 11:57:24 crc kubenswrapper[4805]: [+]poststarthook/max-in-flight-filter ok Dec 16 11:57:24 crc kubenswrapper[4805]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 16 11:57:24 crc kubenswrapper[4805]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 16 11:57:24 crc kubenswrapper[4805]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 16 11:57:24 crc kubenswrapper[4805]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 16 11:57:24 crc kubenswrapper[4805]: [+]poststarthook/project.openshift.io-projectcache ok Dec 16 11:57:24 crc kubenswrapper[4805]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 16 11:57:24 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-startinformers ok Dec 16 11:57:24 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 16 11:57:24 crc kubenswrapper[4805]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 16 11:57:24 crc kubenswrapper[4805]: livez check failed Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.767668 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" podUID="5826956e-b8ea-4053-8987-ebe9f350c975" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 
11:57:24.817775 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prkth" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.934677 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 11:57:24 crc kubenswrapper[4805]: I1216 11:57:24.981929 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 11:57:25 crc kubenswrapper[4805]: I1216 11:57:25.053416 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 16 11:57:25 crc kubenswrapper[4805]: I1216 11:57:25.060577 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 11:57:25 crc kubenswrapper[4805]: I1216 11:57:25.061920 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 11:57:25 crc kubenswrapper[4805]: I1216 11:57:25.438489 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b0de69a3-3d67-4100-8d75-414bbbd2c52d","Type":"ContainerStarted","Data":"5d12319b65342ae0681c438a140d5221829399c8c0f359c6ec653412749de60e"} Dec 16 11:57:25 crc kubenswrapper[4805]: I1216 11:57:25.466094 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" event={"ID":"25907081-958d-4fb5-a8d2-ce8454adedfb","Type":"ContainerStarted","Data":"db1c9b47b929aea0a273a06775a86c674a2932297dc67f305c69c200e9533496"} Dec 16 11:57:25 crc kubenswrapper[4805]: I1216 11:57:25.576410 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:25 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:25 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:25 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:25 crc kubenswrapper[4805]: I1216 11:57:25.576468 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:26 crc kubenswrapper[4805]: I1216 11:57:26.363101 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q9ppd"] Dec 16 11:57:26 crc kubenswrapper[4805]: I1216 11:57:26.486192 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" event={"ID":"25907081-958d-4fb5-a8d2-ce8454adedfb","Type":"ContainerStarted","Data":"2449ce921aa9b33b0ab481837dfd81cd444baa386f95cea55efc1db02defbee4"} Dec 16 11:57:26 crc kubenswrapper[4805]: I1216 11:57:26.487302 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:26 crc kubenswrapper[4805]: I1216 11:57:26.549827 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:26 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:26 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:26 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:26 crc kubenswrapper[4805]: I1216 11:57:26.550514 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:26 crc kubenswrapper[4805]: I1216 11:57:26.607300 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" podStartSLOduration=98.607283715 podStartE2EDuration="1m38.607283715s" podCreationTimestamp="2025-12-16 11:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:26.567948522 +0000 UTC m=+120.286206337" watchObservedRunningTime="2025-12-16 11:57:26.607283715 +0000 UTC m=+120.325541530" Dec 16 11:57:26 crc kubenswrapper[4805]: I1216 11:57:26.634675 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 16 11:57:26 crc kubenswrapper[4805]: I1216 11:57:26.636132 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jc8cv"] Dec 16 11:57:26 crc kubenswrapper[4805]: I1216 11:57:26.808317 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dc7vx"] Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.099383 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8sb75"] Dec 16 11:57:27 crc kubenswrapper[4805]: W1216 11:57:27.132307 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf4f3221_afad_4a1d_8471_8018c2f08ddc.slice/crio-1982f272959844ac5765564d1ef26faf4b0d983c46ad334c1bb76cc6da17aeff WatchSource:0}: Error finding container 1982f272959844ac5765564d1ef26faf4b0d983c46ad334c1bb76cc6da17aeff: Status 404 returned error can't find the container with id 1982f272959844ac5765564d1ef26faf4b0d983c46ad334c1bb76cc6da17aeff Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.148980 4805 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4jnvf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 16 11:57:27 crc kubenswrapper[4805]: [+]log ok Dec 16 11:57:27 crc kubenswrapper[4805]: [+]etcd ok Dec 16 11:57:27 crc kubenswrapper[4805]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 16 11:57:27 crc kubenswrapper[4805]: [+]poststarthook/generic-apiserver-start-informers ok Dec 16 11:57:27 crc kubenswrapper[4805]: [+]poststarthook/max-in-flight-filter ok Dec 16 11:57:27 crc kubenswrapper[4805]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 16 11:57:27 crc kubenswrapper[4805]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 16 11:57:27 crc kubenswrapper[4805]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 16 11:57:27 crc kubenswrapper[4805]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 16 11:57:27 crc 
kubenswrapper[4805]: [+]poststarthook/project.openshift.io-projectcache ok Dec 16 11:57:27 crc kubenswrapper[4805]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 16 11:57:27 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-startinformers ok Dec 16 11:57:27 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 16 11:57:27 crc kubenswrapper[4805]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 16 11:57:27 crc kubenswrapper[4805]: livez check failed Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.149058 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" podUID="5826956e-b8ea-4053-8987-ebe9f350c975" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.294133 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhc2l"] Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.343182 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fcgh"] Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.351049 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mcqz5"] Dec 16 11:57:27 crc kubenswrapper[4805]: W1216 11:57:27.378282 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e0e2da1_5f69_4d95_b9eb_4b588258d3f0.slice/crio-8c7ed40816b68ca7d8107a7328d549c4610f646e82910bf2c6e58104e9f5d9c9 WatchSource:0}: Error finding container 8c7ed40816b68ca7d8107a7328d549c4610f646e82910bf2c6e58104e9f5d9c9: Status 404 returned error can't find the container with id 8c7ed40816b68ca7d8107a7328d549c4610f646e82910bf2c6e58104e9f5d9c9 Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.501259 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcqz5" event={"ID":"6c8a922f-a887-401f-9f22-18355a0a81d7","Type":"ContainerStarted","Data":"9cf0b8fcca4045eabdda5eecd4f02cea49fb322fabf71789e71dce83e5b025e1"} Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.512433 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sb75" event={"ID":"cf4f3221-afad-4a1d-8471-8018c2f08ddc","Type":"ContainerStarted","Data":"1982f272959844ac5765564d1ef26faf4b0d983c46ad334c1bb76cc6da17aeff"} Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.522291 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:27 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:27 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:27 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.522339 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.530647 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dc7vx" 
event={"ID":"89930a1c-e5ae-4885-9dfd-f9c10df38c8a","Type":"ContainerStarted","Data":"11ba1bf4f6741d2c308f80d5982e8b7fb76f6efaac16d591404e15a3b698fdf7"} Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.530700 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dc7vx" event={"ID":"89930a1c-e5ae-4885-9dfd-f9c10df38c8a","Type":"ContainerStarted","Data":"8526c46ae9b03e3abff8b9a130c20354a2742c66d8efbcf2ac3d0d4cf15e4c43"} Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.537668 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhc2l" event={"ID":"3ed6f24b-f15e-4540-8fd2-1800a086a69e","Type":"ContainerStarted","Data":"bfa72c25e4b7b345f4dfac5a8e626ceee449d323e1c0c1f00aff99e35512404a"} Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.548095 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6h6x"] Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.576652 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9ppd" event={"ID":"7aba55d3-6790-4f26-9663-63bdf0c9991e","Type":"ContainerStarted","Data":"fd67a44001a7a2cd60cc3492d5028287e6b6f9efea38a792e86058117b770d85"} Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.604481 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b0de69a3-3d67-4100-8d75-414bbbd2c52d","Type":"ContainerStarted","Data":"7cdd2994bc75498ac748a3b8eda0ff4b9ca89f9e007661cac1e3c3a1e3a6dc91"} Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.607241 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"157f6828-6a78-4597-b17c-3cbbafead745","Type":"ContainerStarted","Data":"cda5d9ed06c424e9a615cab15e3d291e5dd5e9e14d6fa78c362ac70ab179d57b"} Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.622201 4805 patch_prober.go:28] interesting pod/console-f9d7485db-dxhld container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.622251 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dxhld" podUID="69858fb6-30f7-4b9b-a240-c95b4aa2de5a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.622729 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jc8cv" event={"ID":"127838d6-328d-46cb-b942-6aed7bfd5048","Type":"ContainerStarted","Data":"f967ff59cb4cb32576be3fe8bf40bb2e15af221b9ac606a157e32094a8ee742e"} Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.666460 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fcgh" event={"ID":"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0","Type":"ContainerStarted","Data":"8c7ed40816b68ca7d8107a7328d549c4610f646e82910bf2c6e58104e9f5d9c9"} Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.710290 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-28rcg" Dec 16 11:57:27 crc kubenswrapper[4805]: I1216 11:57:27.798728 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=7.798709092 podStartE2EDuration="7.798709092s" podCreationTimestamp="2025-12-16 11:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:27.659739617 +0000 UTC m=+121.377997422" watchObservedRunningTime="2025-12-16 11:57:27.798709092 +0000 UTC m=+121.516966907" Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.107571 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.135856 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.135917 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.136312 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.136339 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.136370 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-s9j77" Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.136976 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"728e9d4544342492fc4c0c031cf69fdf13f00d0bd2a0e58e4b38af871c74cb9f"} pod="openshift-console/downloads-7954f5f757-s9j77" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.137016 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" containerID="cri-o://728e9d4544342492fc4c0c031cf69fdf13f00d0bd2a0e58e4b38af871c74cb9f" gracePeriod=2 Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.137334 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.137360 4805 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.515501 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:28 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:28 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:28 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.515759 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.673356 4805 generic.go:334] "Generic (PLEG): container finished" podID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" containerID="75109a9a53e466d1061baf2a8d854a739a6bf6ebb1f897a6d2668a52b65dbf5c" exitCode=0 Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.673663 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sb75" event={"ID":"cf4f3221-afad-4a1d-8471-8018c2f08ddc","Type":"ContainerDied","Data":"75109a9a53e466d1061baf2a8d854a739a6bf6ebb1f897a6d2668a52b65dbf5c"} Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.675774 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.676478 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"157f6828-6a78-4597-b17c-3cbbafead745","Type":"ContainerStarted","Data":"6ac1624be9983c979e962b8c089ca53233c94632681f3e72a4ecad17e21f2693"} Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.680170 4805 generic.go:334] "Generic (PLEG): container finished" podID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" containerID="cc7aac10b7f12d897512be95cb0a2bd7c74a5a39feb42c67502b0efcfa68f2d3" exitCode=0 Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.680330 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhc2l" event={"ID":"3ed6f24b-f15e-4540-8fd2-1800a086a69e","Type":"ContainerDied","Data":"cc7aac10b7f12d897512be95cb0a2bd7c74a5a39feb42c67502b0efcfa68f2d3"} Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.681961 4805 generic.go:334] "Generic (PLEG): container finished" podID="7aba55d3-6790-4f26-9663-63bdf0c9991e" containerID="f6bd65b24ec53b49ef1b666236f492b5f31ed6478ac0946ae39d51ab231e840c" exitCode=0 Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.682034 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9ppd" event={"ID":"7aba55d3-6790-4f26-9663-63bdf0c9991e","Type":"ContainerDied","Data":"f6bd65b24ec53b49ef1b666236f492b5f31ed6478ac0946ae39d51ab231e840c"} Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.685793 4805 generic.go:334] "Generic (PLEG): container finished" podID="ce9b2f10-5fc2-4784-9c34-5fc3ed544115" containerID="2800e45a02b68fc9c957a021438bfc2a843fa13420d2c3618f2e56720135844b" exitCode=0 Dec 
16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.685864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" event={"ID":"ce9b2f10-5fc2-4784-9c34-5fc3ed544115","Type":"ContainerDied","Data":"2800e45a02b68fc9c957a021438bfc2a843fa13420d2c3618f2e56720135844b"} Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.692986 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6h6x" event={"ID":"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73","Type":"ContainerStarted","Data":"7d974e6f029136d4ff861671e6a8ba83c1dd2171e7e00d7ce628cd1d93229b45"} Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.694734 4805 generic.go:334] "Generic (PLEG): container finished" podID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerID="728e9d4544342492fc4c0c031cf69fdf13f00d0bd2a0e58e4b38af871c74cb9f" exitCode=0 Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.694767 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-s9j77" event={"ID":"184529ec-0250-4ed3-a1ef-4a5606202a85","Type":"ContainerDied","Data":"728e9d4544342492fc4c0c031cf69fdf13f00d0bd2a0e58e4b38af871c74cb9f"} Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.696562 4805 generic.go:334] "Generic (PLEG): container finished" podID="b0de69a3-3d67-4100-8d75-414bbbd2c52d" containerID="7cdd2994bc75498ac748a3b8eda0ff4b9ca89f9e007661cac1e3c3a1e3a6dc91" exitCode=0 Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.696712 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b0de69a3-3d67-4100-8d75-414bbbd2c52d","Type":"ContainerDied","Data":"7cdd2994bc75498ac748a3b8eda0ff4b9ca89f9e007661cac1e3c3a1e3a6dc91"} Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.697981 4805 generic.go:334] "Generic (PLEG): container finished" podID="127838d6-328d-46cb-b942-6aed7bfd5048" containerID="e7e3ba305de4b6b8f5837613db08019c8152481cc499cdde62b33ab666e4be07" exitCode=0 Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.698353 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jc8cv" event={"ID":"127838d6-328d-46cb-b942-6aed7bfd5048","Type":"ContainerDied","Data":"e7e3ba305de4b6b8f5837613db08019c8152481cc499cdde62b33ab666e4be07"} Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.699757 4805 generic.go:334] "Generic (PLEG): container finished" podID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" containerID="11ba1bf4f6741d2c308f80d5982e8b7fb76f6efaac16d591404e15a3b698fdf7" exitCode=0 Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.699856 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dc7vx" event={"ID":"89930a1c-e5ae-4885-9dfd-f9c10df38c8a","Type":"ContainerDied","Data":"11ba1bf4f6741d2c308f80d5982e8b7fb76f6efaac16d591404e15a3b698fdf7"} Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.772711 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=8.772692213 podStartE2EDuration="8.772692213s" podCreationTimestamp="2025-12-16 11:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:57:28.746480447 +0000 UTC m=+122.464738272" watchObservedRunningTime="2025-12-16 11:57:28.772692213 +0000 UTC m=+122.490950018" Dec 
16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.900962 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 11:57:28 crc kubenswrapper[4805]: I1216 11:57:28.964945 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzztn" Dec 16 11:57:29 crc kubenswrapper[4805]: I1216 11:57:29.535833 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:29 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:29 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:29 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:29 crc kubenswrapper[4805]: I1216 11:57:29.536154 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:29 crc kubenswrapper[4805]: I1216 11:57:29.712582 4805 generic.go:334] "Generic (PLEG): container finished" podID="1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" containerID="3e2c778f14b0bf95f974a634c708be581b6df1fff315bddb1a704e83ca814d86" exitCode=0 Dec 16 11:57:29 crc kubenswrapper[4805]: I1216 11:57:29.712616 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6h6x" event={"ID":"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73","Type":"ContainerDied","Data":"3e2c778f14b0bf95f974a634c708be581b6df1fff315bddb1a704e83ca814d86"} Dec 16 11:57:29 crc kubenswrapper[4805]: I1216 11:57:29.765133 4805 generic.go:334] "Generic (PLEG): container finished" podID="157f6828-6a78-4597-b17c-3cbbafead745" containerID="6ac1624be9983c979e962b8c089ca53233c94632681f3e72a4ecad17e21f2693" exitCode=0 Dec 16 11:57:29 crc kubenswrapper[4805]: I1216 11:57:29.765266 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"157f6828-6a78-4597-b17c-3cbbafead745","Type":"ContainerDied","Data":"6ac1624be9983c979e962b8c089ca53233c94632681f3e72a4ecad17e21f2693"} Dec 16 11:57:29 crc kubenswrapper[4805]: I1216 11:57:29.768450 4805 generic.go:334] "Generic (PLEG): container finished" podID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" containerID="5f07ffa9dc4fddcf5a74426322f66df4d1d507553c8fc02b3ed97bf574fab80b" exitCode=0 Dec 16 11:57:29 crc kubenswrapper[4805]: I1216 11:57:29.768536 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fcgh" event={"ID":"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0","Type":"ContainerDied","Data":"5f07ffa9dc4fddcf5a74426322f66df4d1d507553c8fc02b3ed97bf574fab80b"} Dec 16 11:57:29 crc kubenswrapper[4805]: I1216 11:57:29.776902 4805 generic.go:334] "Generic (PLEG): container finished" podID="6c8a922f-a887-401f-9f22-18355a0a81d7" containerID="e71cb658cbda46ec6fbec7ae8dd6ee7fdcb51b918cfbbdc70c6d31c8e6db22ce" exitCode=0 Dec 16 11:57:29 crc kubenswrapper[4805]: I1216 11:57:29.777016 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcqz5" event={"ID":"6c8a922f-a887-401f-9f22-18355a0a81d7","Type":"ContainerDied","Data":"e71cb658cbda46ec6fbec7ae8dd6ee7fdcb51b918cfbbdc70c6d31c8e6db22ce"} Dec 16 
11:57:29 crc kubenswrapper[4805]: I1216 11:57:29.782773 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-s9j77" event={"ID":"184529ec-0250-4ed3-a1ef-4a5606202a85","Type":"ContainerStarted","Data":"c705467684fbe639104cc62c262c2df65cbc8144039ebfd001f09b244f2a45b0"} Dec 16 11:57:29 crc kubenswrapper[4805]: I1216 11:57:29.782818 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-s9j77" Dec 16 11:57:29 crc kubenswrapper[4805]: I1216 11:57:29.782988 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:57:29 crc kubenswrapper[4805]: I1216 11:57:29.783018 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.533804 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:30 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:30 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:30 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.534666 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.575214 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.575815 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.690966 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-config-volume\") pod \"ce9b2f10-5fc2-4784-9c34-5fc3ed544115\" (UID: \"ce9b2f10-5fc2-4784-9c34-5fc3ed544115\") " Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.691062 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0de69a3-3d67-4100-8d75-414bbbd2c52d-kubelet-dir\") pod \"b0de69a3-3d67-4100-8d75-414bbbd2c52d\" (UID: \"b0de69a3-3d67-4100-8d75-414bbbd2c52d\") " Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.691096 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-secret-volume\") pod \"ce9b2f10-5fc2-4784-9c34-5fc3ed544115\" (UID: \"ce9b2f10-5fc2-4784-9c34-5fc3ed544115\") " Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.691224 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0de69a3-3d67-4100-8d75-414bbbd2c52d-kube-api-access\") pod \"b0de69a3-3d67-4100-8d75-414bbbd2c52d\" (UID: \"b0de69a3-3d67-4100-8d75-414bbbd2c52d\") " Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.691244 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4k89\" (UniqueName: \"kubernetes.io/projected/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-kube-api-access-s4k89\") pod \"ce9b2f10-5fc2-4784-9c34-5fc3ed544115\" (UID: \"ce9b2f10-5fc2-4784-9c34-5fc3ed544115\") " Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.692729 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0de69a3-3d67-4100-8d75-414bbbd2c52d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b0de69a3-3d67-4100-8d75-414bbbd2c52d" (UID: "b0de69a3-3d67-4100-8d75-414bbbd2c52d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.694435 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-config-volume" (OuterVolumeSpecName: "config-volume") pod "ce9b2f10-5fc2-4784-9c34-5fc3ed544115" (UID: "ce9b2f10-5fc2-4784-9c34-5fc3ed544115"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.706034 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-kube-api-access-s4k89" (OuterVolumeSpecName: "kube-api-access-s4k89") pod "ce9b2f10-5fc2-4784-9c34-5fc3ed544115" (UID: "ce9b2f10-5fc2-4784-9c34-5fc3ed544115"). InnerVolumeSpecName "kube-api-access-s4k89". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.711337 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0de69a3-3d67-4100-8d75-414bbbd2c52d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b0de69a3-3d67-4100-8d75-414bbbd2c52d" (UID: "b0de69a3-3d67-4100-8d75-414bbbd2c52d"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.739740 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ce9b2f10-5fc2-4784-9c34-5fc3ed544115" (UID: "ce9b2f10-5fc2-4784-9c34-5fc3ed544115"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.793230 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0de69a3-3d67-4100-8d75-414bbbd2c52d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.793272 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.793308 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0de69a3-3d67-4100-8d75-414bbbd2c52d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.793323 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4k89\" (UniqueName: \"kubernetes.io/projected/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-kube-api-access-s4k89\") on node \"crc\" DevicePath \"\"" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.793334 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce9b2f10-5fc2-4784-9c34-5fc3ed544115-config-volume\") on node \"crc\" DevicePath \"\"" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.873444 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" event={"ID":"ce9b2f10-5fc2-4784-9c34-5fc3ed544115","Type":"ContainerDied","Data":"c21cc060b0903e4df6b43783302e0463ad4710a4018e8d3ad33e5a2a04cf5277"} Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.873491 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c21cc060b0903e4df6b43783302e0463ad4710a4018e8d3ad33e5a2a04cf5277" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.873566 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.916922 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.917107 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b0de69a3-3d67-4100-8d75-414bbbd2c52d","Type":"ContainerDied","Data":"5d12319b65342ae0681c438a140d5221829399c8c0f359c6ec653412749de60e"} Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.917221 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d12319b65342ae0681c438a140d5221829399c8c0f359c6ec653412749de60e" Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.918090 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:57:30 crc kubenswrapper[4805]: I1216 11:57:30.918187 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:57:31 crc kubenswrapper[4805]: I1216 11:57:31.512122 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:31 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:31 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:31 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:31 crc kubenswrapper[4805]: I1216 11:57:31.512223 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:32 crc kubenswrapper[4805]: I1216 11:57:32.210789 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:57:32 crc kubenswrapper[4805]: I1216 11:57:32.222979 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4jnvf" Dec 16 11:57:32 crc kubenswrapper[4805]: I1216 11:57:32.451683 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 16 11:57:32 crc kubenswrapper[4805]: I1216 11:57:32.517575 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:32 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:32 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:32 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:32 crc kubenswrapper[4805]: I1216 11:57:32.517660 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:32 crc kubenswrapper[4805]: I1216 11:57:32.656596 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/157f6828-6a78-4597-b17c-3cbbafead745-kubelet-dir\") pod \"157f6828-6a78-4597-b17c-3cbbafead745\" (UID: \"157f6828-6a78-4597-b17c-3cbbafead745\") " Dec 16 11:57:32 crc kubenswrapper[4805]: I1216 11:57:32.656650 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/157f6828-6a78-4597-b17c-3cbbafead745-kube-api-access\") pod \"157f6828-6a78-4597-b17c-3cbbafead745\" (UID: \"157f6828-6a78-4597-b17c-3cbbafead745\") " Dec 16 11:57:32 crc kubenswrapper[4805]: I1216 11:57:32.657749 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/157f6828-6a78-4597-b17c-3cbbafead745-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "157f6828-6a78-4597-b17c-3cbbafead745" (UID: "157f6828-6a78-4597-b17c-3cbbafead745"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 11:57:32 crc kubenswrapper[4805]: I1216 11:57:32.679176 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157f6828-6a78-4597-b17c-3cbbafead745-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "157f6828-6a78-4597-b17c-3cbbafead745" (UID: "157f6828-6a78-4597-b17c-3cbbafead745"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:57:32 crc kubenswrapper[4805]: I1216 11:57:32.764152 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/157f6828-6a78-4597-b17c-3cbbafead745-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 16 11:57:32 crc kubenswrapper[4805]: I1216 11:57:32.764189 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/157f6828-6a78-4597-b17c-3cbbafead745-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 16 11:57:32 crc kubenswrapper[4805]: I1216 11:57:32.974312 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 16 11:57:32 crc kubenswrapper[4805]: I1216 11:57:32.974328 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"157f6828-6a78-4597-b17c-3cbbafead745","Type":"ContainerDied","Data":"cda5d9ed06c424e9a615cab15e3d291e5dd5e9e14d6fa78c362ac70ab179d57b"} Dec 16 11:57:32 crc kubenswrapper[4805]: I1216 11:57:32.974394 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cda5d9ed06c424e9a615cab15e3d291e5dd5e9e14d6fa78c362ac70ab179d57b" Dec 16 11:57:33 crc kubenswrapper[4805]: I1216 11:57:33.607581 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:33 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:33 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:33 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:33 crc kubenswrapper[4805]: I1216 11:57:33.607641 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:34 crc kubenswrapper[4805]: I1216 11:57:34.517782 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:34 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:34 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:34 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:34 crc kubenswrapper[4805]: I1216 11:57:34.518173 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:35 crc kubenswrapper[4805]: I1216 11:57:35.525748 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:35 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:35 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:35 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:35 crc kubenswrapper[4805]: I1216 11:57:35.525833 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:36 crc kubenswrapper[4805]: I1216 11:57:36.511243 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:36 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:36 crc kubenswrapper[4805]: 
[+]process-running ok Dec 16 11:57:36 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:36 crc kubenswrapper[4805]: I1216 11:57:36.511314 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:37 crc kubenswrapper[4805]: I1216 11:57:37.517471 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:37 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:37 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:37 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:37 crc kubenswrapper[4805]: I1216 11:57:37.517820 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:37 crc kubenswrapper[4805]: I1216 11:57:37.611369 4805 patch_prober.go:28] interesting pod/console-f9d7485db-dxhld container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 16 11:57:37 crc kubenswrapper[4805]: I1216 11:57:37.611448 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dxhld" podUID="69858fb6-30f7-4b9b-a240-c95b4aa2de5a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 16 11:57:38 crc kubenswrapper[4805]: I1216 11:57:38.216779 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:57:38 crc kubenswrapper[4805]: I1216 11:57:38.216855 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:57:38 crc kubenswrapper[4805]: I1216 11:57:38.217580 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:57:38 crc kubenswrapper[4805]: I1216 11:57:38.217637 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:57:38 crc kubenswrapper[4805]: I1216 11:57:38.531398 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:38 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:38 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:38 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:38 crc kubenswrapper[4805]: I1216 11:57:38.531468 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:39 crc kubenswrapper[4805]: I1216 11:57:39.540996 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:39 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:39 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:39 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:39 crc kubenswrapper[4805]: I1216 11:57:39.553263 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:40 crc kubenswrapper[4805]: I1216 11:57:40.514744 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:40 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:40 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:40 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:40 crc kubenswrapper[4805]: I1216 11:57:40.514846 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:41 crc kubenswrapper[4805]: I1216 11:57:41.178285 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 11:57:41 crc kubenswrapper[4805]: I1216 11:57:41.522324 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:41 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:41 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:41 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:41 crc kubenswrapper[4805]: I1216 11:57:41.522377 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:42 crc kubenswrapper[4805]: I1216 11:57:42.521929 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:42 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:42 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:42 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:42 crc kubenswrapper[4805]: I1216 11:57:42.521980 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:43 crc kubenswrapper[4805]: I1216 11:57:43.590032 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:43 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:43 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:43 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:43 crc kubenswrapper[4805]: I1216 11:57:43.590102 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:44 crc kubenswrapper[4805]: I1216 11:57:44.554556 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:44 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Dec 16 11:57:44 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:44 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:44 crc kubenswrapper[4805]: I1216 11:57:44.554942 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:45 crc kubenswrapper[4805]: I1216 11:57:45.516889 4805 patch_prober.go:28] interesting pod/router-default-5444994796-j4rx5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 16 11:57:45 crc kubenswrapper[4805]: [+]has-synced ok Dec 16 11:57:45 crc kubenswrapper[4805]: [+]process-running ok Dec 16 11:57:45 crc kubenswrapper[4805]: healthz check failed Dec 16 11:57:45 crc kubenswrapper[4805]: I1216 11:57:45.516949 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j4rx5" podUID="1d60eaf3-b804-48a1-bc67-710f8ce5805f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 11:57:46 crc kubenswrapper[4805]: I1216 11:57:46.540746 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:57:46 crc kubenswrapper[4805]: I1216 11:57:46.543476 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-j4rx5" Dec 16 11:57:47 crc kubenswrapper[4805]: I1216 11:57:47.703101 4805 patch_prober.go:28] interesting pod/console-f9d7485db-dxhld 
container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 16 11:57:47 crc kubenswrapper[4805]: I1216 11:57:47.703256 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dxhld" podUID="69858fb6-30f7-4b9b-a240-c95b4aa2de5a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 16 11:57:48 crc kubenswrapper[4805]: I1216 11:57:48.143087 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:57:48 crc kubenswrapper[4805]: I1216 11:57:48.144102 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:57:48 crc kubenswrapper[4805]: I1216 11:57:48.144878 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:57:48 crc kubenswrapper[4805]: I1216 11:57:48.144968 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:57:49 crc kubenswrapper[4805]: I1216 11:57:49.064474 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqm6m" Dec 16 11:57:51 crc kubenswrapper[4805]: I1216 11:57:51.606655 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:57:51 crc kubenswrapper[4805]: I1216 11:57:51.607026 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:57:51 crc kubenswrapper[4805]: I1216 11:57:51.607076 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:57:51 crc kubenswrapper[4805]: I1216 
11:57:51.607133 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:57:51 crc kubenswrapper[4805]: I1216 11:57:51.609764 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 16 11:57:51 crc kubenswrapper[4805]: I1216 11:57:51.610218 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 16 11:57:51 crc kubenswrapper[4805]: I1216 11:57:51.610364 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 16 11:57:51 crc kubenswrapper[4805]: I1216 11:57:51.638019 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 16 11:57:51 crc kubenswrapper[4805]: I1216 11:57:51.660025 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:57:51 crc kubenswrapper[4805]: I1216 11:57:51.663385 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:57:51 crc kubenswrapper[4805]: I1216 11:57:51.672193 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:57:51 crc kubenswrapper[4805]: I1216 11:57:51.711313 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:57:51 crc kubenswrapper[4805]: I1216 11:57:51.753836 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:57:51 crc kubenswrapper[4805]: I1216 11:57:51.900676 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 16 11:57:51 crc kubenswrapper[4805]: I1216 11:57:51.901821 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 16 11:57:57 crc kubenswrapper[4805]: I1216 11:57:57.119770 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 11:57:57 crc kubenswrapper[4805]: I1216 11:57:57.120355 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 11:57:57 crc kubenswrapper[4805]: I1216 11:57:57.621379 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:57:57 crc kubenswrapper[4805]: I1216 11:57:57.625113 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-dxhld" Dec 16 11:57:58 crc kubenswrapper[4805]: I1216 11:57:58.137441 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:57:58 crc kubenswrapper[4805]: I1216 11:57:58.137484 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:57:58 crc kubenswrapper[4805]: I1216 11:57:58.137700 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:57:58 crc kubenswrapper[4805]: I1216 11:57:58.137713 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:57:58 crc kubenswrapper[4805]: I1216 11:57:58.137735 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-s9j77" Dec 16 11:57:58 crc kubenswrapper[4805]: I1216 11:57:58.138062 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"c705467684fbe639104cc62c262c2df65cbc8144039ebfd001f09b244f2a45b0"} pod="openshift-console/downloads-7954f5f757-s9j77" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 16 11:57:58 crc kubenswrapper[4805]: I1216 11:57:58.138086 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" 
containerID="cri-o://c705467684fbe639104cc62c262c2df65cbc8144039ebfd001f09b244f2a45b0" gracePeriod=2 Dec 16 11:57:58 crc kubenswrapper[4805]: I1216 11:57:58.138495 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:57:58 crc kubenswrapper[4805]: I1216 11:57:58.138512 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:57:59 crc kubenswrapper[4805]: I1216 11:57:59.139594 4805 generic.go:334] "Generic (PLEG): container finished" podID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerID="c705467684fbe639104cc62c262c2df65cbc8144039ebfd001f09b244f2a45b0" exitCode=0 Dec 16 11:57:59 crc kubenswrapper[4805]: I1216 11:57:59.139945 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-s9j77" event={"ID":"184529ec-0250-4ed3-a1ef-4a5606202a85","Type":"ContainerDied","Data":"c705467684fbe639104cc62c262c2df65cbc8144039ebfd001f09b244f2a45b0"} Dec 16 11:57:59 crc kubenswrapper[4805]: I1216 11:57:59.139988 4805 scope.go:117] "RemoveContainer" containerID="728e9d4544342492fc4c0c031cf69fdf13f00d0bd2a0e58e4b38af871c74cb9f" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.698900 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 16 11:58:04 crc kubenswrapper[4805]: E1216 11:58:04.699789 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9b2f10-5fc2-4784-9c34-5fc3ed544115" containerName="collect-profiles" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.699805 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9b2f10-5fc2-4784-9c34-5fc3ed544115" containerName="collect-profiles" Dec 16 11:58:04 crc kubenswrapper[4805]: E1216 11:58:04.699823 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157f6828-6a78-4597-b17c-3cbbafead745" containerName="pruner" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.699832 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="157f6828-6a78-4597-b17c-3cbbafead745" containerName="pruner" Dec 16 11:58:04 crc kubenswrapper[4805]: E1216 11:58:04.699855 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0de69a3-3d67-4100-8d75-414bbbd2c52d" containerName="pruner" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.699871 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0de69a3-3d67-4100-8d75-414bbbd2c52d" containerName="pruner" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.700060 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9b2f10-5fc2-4784-9c34-5fc3ed544115" containerName="collect-profiles" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.700081 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0de69a3-3d67-4100-8d75-414bbbd2c52d" containerName="pruner" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.700110 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="157f6828-6a78-4597-b17c-3cbbafead745" containerName="pruner" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.700755 4805 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.704632 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.705478 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.739721 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.741599 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.741658 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.842481 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.842630 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.842732 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 16 11:58:04 crc kubenswrapper[4805]: I1216 11:58:04.866850 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 16 11:58:05 crc kubenswrapper[4805]: I1216 11:58:05.033127 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 16 11:58:08 crc kubenswrapper[4805]: I1216 11:58:08.135364 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:58:08 crc kubenswrapper[4805]: I1216 11:58:08.136020 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:58:10 crc kubenswrapper[4805]: I1216 11:58:10.082501 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 16 11:58:10 crc kubenswrapper[4805]: I1216 11:58:10.084190 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 16 11:58:10 crc kubenswrapper[4805]: I1216 11:58:10.089779 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 16 11:58:10 crc kubenswrapper[4805]: I1216 11:58:10.139345 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21c4a860-6898-4e21-9425-9925d0b3380b-kube-api-access\") pod \"installer-9-crc\" (UID: \"21c4a860-6898-4e21-9425-9925d0b3380b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 16 11:58:10 crc kubenswrapper[4805]: I1216 11:58:10.139408 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21c4a860-6898-4e21-9425-9925d0b3380b-var-lock\") pod \"installer-9-crc\" (UID: \"21c4a860-6898-4e21-9425-9925d0b3380b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 16 11:58:10 crc kubenswrapper[4805]: I1216 11:58:10.139444 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21c4a860-6898-4e21-9425-9925d0b3380b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"21c4a860-6898-4e21-9425-9925d0b3380b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 16 11:58:10 crc kubenswrapper[4805]: I1216 11:58:10.241084 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21c4a860-6898-4e21-9425-9925d0b3380b-kube-api-access\") pod \"installer-9-crc\" (UID: \"21c4a860-6898-4e21-9425-9925d0b3380b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 16 11:58:10 crc kubenswrapper[4805]: I1216 11:58:10.241310 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21c4a860-6898-4e21-9425-9925d0b3380b-var-lock\") pod \"installer-9-crc\" (UID: \"21c4a860-6898-4e21-9425-9925d0b3380b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 16 11:58:10 crc kubenswrapper[4805]: I1216 11:58:10.241400 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21c4a860-6898-4e21-9425-9925d0b3380b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"21c4a860-6898-4e21-9425-9925d0b3380b\") " 
pod="openshift-kube-apiserver/installer-9-crc" Dec 16 11:58:10 crc kubenswrapper[4805]: I1216 11:58:10.241442 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21c4a860-6898-4e21-9425-9925d0b3380b-var-lock\") pod \"installer-9-crc\" (UID: \"21c4a860-6898-4e21-9425-9925d0b3380b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 16 11:58:10 crc kubenswrapper[4805]: I1216 11:58:10.241593 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21c4a860-6898-4e21-9425-9925d0b3380b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"21c4a860-6898-4e21-9425-9925d0b3380b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 16 11:58:10 crc kubenswrapper[4805]: I1216 11:58:10.265280 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21c4a860-6898-4e21-9425-9925d0b3380b-kube-api-access\") pod \"installer-9-crc\" (UID: \"21c4a860-6898-4e21-9425-9925d0b3380b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 16 11:58:10 crc kubenswrapper[4805]: I1216 11:58:10.424410 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 16 11:58:18 crc kubenswrapper[4805]: I1216 11:58:18.138244 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:58:18 crc kubenswrapper[4805]: I1216 11:58:18.138811 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:58:18 crc kubenswrapper[4805]: I1216 11:58:18.407991 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mwpbn"] Dec 16 11:58:23 crc kubenswrapper[4805]: E1216 11:58:23.107627 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 16 11:58:23 crc kubenswrapper[4805]: E1216 11:58:23.108470 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lz9t2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mcqz5_openshift-marketplace(6c8a922f-a887-401f-9f22-18355a0a81d7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 16 11:58:23 crc kubenswrapper[4805]: E1216 11:58:23.109613 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mcqz5" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" Dec 16 11:58:23 crc kubenswrapper[4805]: E1216 11:58:23.520955 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 16 11:58:23 crc kubenswrapper[4805]: E1216 11:58:23.521177 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bw8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lhc2l_openshift-marketplace(3ed6f24b-f15e-4540-8fd2-1800a086a69e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 16 11:58:23 crc kubenswrapper[4805]: E1216 11:58:23.522407 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lhc2l" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" Dec 16 11:58:27 crc kubenswrapper[4805]: I1216 11:58:27.071836 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 11:58:27 crc kubenswrapper[4805]: I1216 11:58:27.072162 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 11:58:27 crc kubenswrapper[4805]: E1216 11:58:27.903965 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mcqz5" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" Dec 16 11:58:27 crc kubenswrapper[4805]: E1216 11:58:27.904067 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lhc2l" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" Dec 16 11:58:28 crc kubenswrapper[4805]: E1216 
11:58:28.096394 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 16 11:58:28 crc kubenswrapper[4805]: E1216 11:58:28.096848 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-265tc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4fcgh_openshift-marketplace(4e0e2da1-5f69-4d95-b9eb-4b588258d3f0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 16 11:58:28 crc kubenswrapper[4805]: E1216 11:58:28.098394 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4fcgh" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" Dec 16 11:58:28 crc kubenswrapper[4805]: I1216 11:58:28.135451 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:58:28 crc kubenswrapper[4805]: I1216 11:58:28.135606 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:58:31 crc kubenswrapper[4805]: E1216 11:58:31.615942 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4fcgh" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" Dec 16 11:58:31 crc kubenswrapper[4805]: E1216 11:58:31.705882 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 16 11:58:31 crc kubenswrapper[4805]: E1216 11:58:31.706206 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxgff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jc8cv_openshift-marketplace(127838d6-328d-46cb-b942-6aed7bfd5048): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 16 11:58:31 crc kubenswrapper[4805]: E1216 11:58:31.707495 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-jc8cv" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" Dec 16 11:58:31 crc kubenswrapper[4805]: E1216 11:58:31.771033 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 16 11:58:31 crc kubenswrapper[4805]: E1216 11:58:31.771644 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffz4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-q9ppd_openshift-marketplace(7aba55d3-6790-4f26-9663-63bdf0c9991e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 16 11:58:31 crc kubenswrapper[4805]: E1216 11:58:31.773264 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-q9ppd" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" Dec 16 11:58:31 crc kubenswrapper[4805]: E1216 11:58:31.785493 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 16 11:58:31 crc kubenswrapper[4805]: E1216 11:58:31.785684 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbv75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dc7vx_openshift-marketplace(89930a1c-e5ae-4885-9dfd-f9c10df38c8a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 16 11:58:31 crc kubenswrapper[4805]: E1216 11:58:31.787086 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dc7vx" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" Dec 16 11:58:31 crc kubenswrapper[4805]: E1216 11:58:31.815093 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 16 11:58:31 crc kubenswrapper[4805]: E1216 11:58:31.815284 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lm5p8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8sb75_openshift-marketplace(cf4f3221-afad-4a1d-8471-8018c2f08ddc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 16 11:58:31 crc kubenswrapper[4805]: E1216 11:58:31.816657 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8sb75" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" Dec 16 11:58:32 crc kubenswrapper[4805]: I1216 11:58:32.570949 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ff5af455d76205129ccc79568217b3e6d09f078590e2ccd6e801d186eaa462f3"} Dec 16 11:58:32 crc kubenswrapper[4805]: I1216 11:58:32.577347 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6h6x" event={"ID":"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73","Type":"ContainerStarted","Data":"52b3452f0c8cd05f5f2a4026f4296e6528c2d65dc9688d60ec4d91c4237f2dcd"} Dec 16 11:58:32 crc kubenswrapper[4805]: I1216 11:58:32.582943 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-s9j77" event={"ID":"184529ec-0250-4ed3-a1ef-4a5606202a85","Type":"ContainerStarted","Data":"5773aba545171d28f1cd481be821f42aa0a869845134449b3464bb45f2ff7964"} Dec 16 11:58:32 crc kubenswrapper[4805]: I1216 11:58:32.583908 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:58:32 crc kubenswrapper[4805]: I1216 11:58:32.584001 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:58:32 crc kubenswrapper[4805]: I1216 11:58:32.584360 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-s9j77" Dec 16 11:58:32 crc kubenswrapper[4805]: E1216 11:58:32.606855 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jc8cv" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" Dec 16 11:58:32 crc kubenswrapper[4805]: E1216 11:58:32.607184 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8sb75" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" Dec 16 11:58:32 crc kubenswrapper[4805]: E1216 11:58:32.615791 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dc7vx" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" Dec 16 11:58:32 crc kubenswrapper[4805]: E1216 11:58:32.615858 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-q9ppd" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" Dec 16 11:58:32 crc kubenswrapper[4805]: I1216 11:58:32.766171 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 16 11:58:32 crc kubenswrapper[4805]: I1216 11:58:32.835206 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 16 11:58:33 crc kubenswrapper[4805]: I1216 11:58:33.591587 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"169e44b048b9e135544d0ba84fcd02f73354cbc7cc850e066649636782b16575"} Dec 16 11:58:33 crc kubenswrapper[4805]: I1216 11:58:33.594085 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2dec4c9dd55e5661c45abbbe3f08667038b960592576aab35c6b1d7e6de5cb6c"} Dec 16 11:58:33 crc kubenswrapper[4805]: I1216 11:58:33.596751 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bfecd6c681b836e759f4b7004909c84b2600628fbab11303ba1904accf1adf3f"} Dec 16 11:58:33 crc kubenswrapper[4805]: I1216 11:58:33.596783 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ba1c08a45e5d7560d0736245f7b6ea9b80e820e4d14c1955c48ba3eb6dca64df"} Dec 16 11:58:33 crc kubenswrapper[4805]: I1216 11:58:33.598466 4805 generic.go:334] "Generic (PLEG): container finished" podID="1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" containerID="52b3452f0c8cd05f5f2a4026f4296e6528c2d65dc9688d60ec4d91c4237f2dcd" exitCode=0 Dec 16 11:58:33 crc kubenswrapper[4805]: I1216 11:58:33.598510 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6h6x" event={"ID":"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73","Type":"ContainerDied","Data":"52b3452f0c8cd05f5f2a4026f4296e6528c2d65dc9688d60ec4d91c4237f2dcd"} Dec 16 11:58:33 crc kubenswrapper[4805]: I1216 11:58:33.604824 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"21c4a860-6898-4e21-9425-9925d0b3380b","Type":"ContainerStarted","Data":"09ab325740cade80d7791920865e7cf35df16b2983d7e195ff3253b7865fcb5b"} Dec 16 11:58:33 crc kubenswrapper[4805]: I1216 11:58:33.606924 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e","Type":"ContainerStarted","Data":"535c55fd0436cc53b0ce3392cd79e1ba8a5424819a0c580006e9610eb8baef5f"} Dec 16 11:58:33 crc kubenswrapper[4805]: I1216 11:58:33.608835 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:58:33 crc kubenswrapper[4805]: I1216 11:58:33.608906 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:58:34 crc kubenswrapper[4805]: I1216 11:58:34.616556 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"21c4a860-6898-4e21-9425-9925d0b3380b","Type":"ContainerStarted","Data":"c7c3e6305ed15815bb98ba8b9ae77b0913d6b33fd0a17964912fb56eca048670"} Dec 16 11:58:34 crc kubenswrapper[4805]: I1216 11:58:34.619461 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e","Type":"ContainerStarted","Data":"8626c034de2c7c9704e219e147c120f25c81fbd3b6723d90b9781a168a067db1"} Dec 16 11:58:34 crc kubenswrapper[4805]: I1216 11:58:34.622850 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9d43f0b4d445ec00b4d389a1fc98e90120f595666458a4c4255d662ec7eeb771"} Dec 16 11:58:34 crc kubenswrapper[4805]: I1216 11:58:34.623286 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:58:34 crc kubenswrapper[4805]: I1216 11:58:34.624004 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" 
start-of-body= Dec 16 11:58:34 crc kubenswrapper[4805]: I1216 11:58:34.624054 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:58:34 crc kubenswrapper[4805]: I1216 11:58:34.644868 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=24.644846508 podStartE2EDuration="24.644846508s" podCreationTimestamp="2025-12-16 11:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:58:34.644753255 +0000 UTC m=+188.363011060" watchObservedRunningTime="2025-12-16 11:58:34.644846508 +0000 UTC m=+188.363104323" Dec 16 11:58:35 crc kubenswrapper[4805]: I1216 11:58:35.630493 4805 generic.go:334] "Generic (PLEG): container finished" podID="ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e" containerID="8626c034de2c7c9704e219e147c120f25c81fbd3b6723d90b9781a168a067db1" exitCode=0 Dec 16 11:58:35 crc kubenswrapper[4805]: I1216 11:58:35.630558 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e","Type":"ContainerDied","Data":"8626c034de2c7c9704e219e147c120f25c81fbd3b6723d90b9781a168a067db1"} Dec 16 11:58:36 crc kubenswrapper[4805]: I1216 11:58:36.886528 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 16 11:58:37 crc kubenswrapper[4805]: I1216 11:58:37.063570 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e-kube-api-access\") pod \"ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e\" (UID: \"ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e\") " Dec 16 11:58:37 crc kubenswrapper[4805]: I1216 11:58:37.063719 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e-kubelet-dir\") pod \"ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e\" (UID: \"ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e\") " Dec 16 11:58:37 crc kubenswrapper[4805]: I1216 11:58:37.064010 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e" (UID: "ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 11:58:37 crc kubenswrapper[4805]: I1216 11:58:37.071517 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e" (UID: "ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:58:37 crc kubenswrapper[4805]: I1216 11:58:37.165527 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 16 11:58:37 crc kubenswrapper[4805]: I1216 11:58:37.165578 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 16 11:58:37 crc kubenswrapper[4805]: I1216 11:58:37.642435 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6h6x" event={"ID":"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73","Type":"ContainerStarted","Data":"2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb"} Dec 16 11:58:37 crc kubenswrapper[4805]: I1216 11:58:37.643845 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e","Type":"ContainerDied","Data":"535c55fd0436cc53b0ce3392cd79e1ba8a5424819a0c580006e9610eb8baef5f"} Dec 16 11:58:37 crc kubenswrapper[4805]: I1216 11:58:37.644046 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="535c55fd0436cc53b0ce3392cd79e1ba8a5424819a0c580006e9610eb8baef5f" Dec 16 11:58:37 crc kubenswrapper[4805]: I1216 11:58:37.643914 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 16 11:58:38 crc kubenswrapper[4805]: I1216 11:58:38.135321 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:58:38 crc kubenswrapper[4805]: I1216 11:58:38.136234 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:58:38 crc kubenswrapper[4805]: I1216 11:58:38.136596 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9j77 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 16 11:58:38 crc kubenswrapper[4805]: I1216 11:58:38.136729 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9j77" podUID="184529ec-0250-4ed3-a1ef-4a5606202a85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 16 11:58:38 crc kubenswrapper[4805]: I1216 11:58:38.670367 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l6h6x" podStartSLOduration=8.406149527 podStartE2EDuration="1m14.670348409s" podCreationTimestamp="2025-12-16 11:57:24 +0000 UTC" firstStartedPulling="2025-12-16 11:57:29.716600137 +0000 UTC m=+123.434857942" lastFinishedPulling="2025-12-16 11:58:35.980799019 +0000 UTC m=+189.699056824" observedRunningTime="2025-12-16 
11:58:38.665673765 +0000 UTC m=+192.383931570" watchObservedRunningTime="2025-12-16 11:58:38.670348409 +0000 UTC m=+192.388606224" Dec 16 11:58:43 crc kubenswrapper[4805]: I1216 11:58:43.537438 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" podUID="832ba633-f44c-4aa1-8791-65656ed2a744" containerName="oauth-openshift" containerID="cri-o://a55a68464c3d51078055096d330283b75e84d74dfda78385ce64b8a1eb75c76c" gracePeriod=15 Dec 16 11:58:44 crc kubenswrapper[4805]: I1216 11:58:44.679593 4805 generic.go:334] "Generic (PLEG): container finished" podID="832ba633-f44c-4aa1-8791-65656ed2a744" containerID="a55a68464c3d51078055096d330283b75e84d74dfda78385ce64b8a1eb75c76c" exitCode=0 Dec 16 11:58:44 crc kubenswrapper[4805]: I1216 11:58:44.679638 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" event={"ID":"832ba633-f44c-4aa1-8791-65656ed2a744","Type":"ContainerDied","Data":"a55a68464c3d51078055096d330283b75e84d74dfda78385ce64b8a1eb75c76c"} Dec 16 11:58:44 crc kubenswrapper[4805]: I1216 11:58:44.959133 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 11:58:44 crc kubenswrapper[4805]: I1216 11:58:44.959211 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 11:58:45 crc kubenswrapper[4805]: I1216 11:58:45.744112 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 11:58:45 crc kubenswrapper[4805]: I1216 11:58:45.812423 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.073721 4805 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mwpbn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.073783 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" podUID="832ba633-f44c-4aa1-8791-65656ed2a744" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.157222 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-s9j77" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.631764 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.653067 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-session\") pod \"832ba633-f44c-4aa1-8791-65656ed2a744\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.653175 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-router-certs\") pod \"832ba633-f44c-4aa1-8791-65656ed2a744\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.653225 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/832ba633-f44c-4aa1-8791-65656ed2a744-audit-dir\") pod \"832ba633-f44c-4aa1-8791-65656ed2a744\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.653278 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-error\") pod \"832ba633-f44c-4aa1-8791-65656ed2a744\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.653308 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-ocp-branding-template\") pod \"832ba633-f44c-4aa1-8791-65656ed2a744\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.653348 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-idp-0-file-data\") pod \"832ba633-f44c-4aa1-8791-65656ed2a744\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.653388 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-trusted-ca-bundle\") pod \"832ba633-f44c-4aa1-8791-65656ed2a744\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.653436 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-provider-selection\") pod \"832ba633-f44c-4aa1-8791-65656ed2a744\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.653448 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/832ba633-f44c-4aa1-8791-65656ed2a744-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "832ba633-f44c-4aa1-8791-65656ed2a744" (UID: "832ba633-f44c-4aa1-8791-65656ed2a744"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.653481 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-cliconfig\") pod \"832ba633-f44c-4aa1-8791-65656ed2a744\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.653625 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcwwz\" (UniqueName: \"kubernetes.io/projected/832ba633-f44c-4aa1-8791-65656ed2a744-kube-api-access-bcwwz\") pod \"832ba633-f44c-4aa1-8791-65656ed2a744\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.653716 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-serving-cert\") pod \"832ba633-f44c-4aa1-8791-65656ed2a744\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.653764 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-service-ca\") pod \"832ba633-f44c-4aa1-8791-65656ed2a744\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.653839 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-audit-policies\") pod \"832ba633-f44c-4aa1-8791-65656ed2a744\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.653899 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-login\") pod \"832ba633-f44c-4aa1-8791-65656ed2a744\" (UID: \"832ba633-f44c-4aa1-8791-65656ed2a744\") " Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.654454 4805 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/832ba633-f44c-4aa1-8791-65656ed2a744-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.654919 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "832ba633-f44c-4aa1-8791-65656ed2a744" (UID: "832ba633-f44c-4aa1-8791-65656ed2a744"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.658348 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "832ba633-f44c-4aa1-8791-65656ed2a744" (UID: "832ba633-f44c-4aa1-8791-65656ed2a744"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.658533 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "832ba633-f44c-4aa1-8791-65656ed2a744" (UID: "832ba633-f44c-4aa1-8791-65656ed2a744"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.665550 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "832ba633-f44c-4aa1-8791-65656ed2a744" (UID: "832ba633-f44c-4aa1-8791-65656ed2a744"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.684277 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "832ba633-f44c-4aa1-8791-65656ed2a744" (UID: "832ba633-f44c-4aa1-8791-65656ed2a744"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.684771 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "832ba633-f44c-4aa1-8791-65656ed2a744" (UID: "832ba633-f44c-4aa1-8791-65656ed2a744"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.702022 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "832ba633-f44c-4aa1-8791-65656ed2a744" (UID: "832ba633-f44c-4aa1-8791-65656ed2a744"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.703015 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "832ba633-f44c-4aa1-8791-65656ed2a744" (UID: "832ba633-f44c-4aa1-8791-65656ed2a744"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.703245 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "832ba633-f44c-4aa1-8791-65656ed2a744" (UID: "832ba633-f44c-4aa1-8791-65656ed2a744"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.703588 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "832ba633-f44c-4aa1-8791-65656ed2a744" (UID: "832ba633-f44c-4aa1-8791-65656ed2a744"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.703876 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/832ba633-f44c-4aa1-8791-65656ed2a744-kube-api-access-bcwwz" (OuterVolumeSpecName: "kube-api-access-bcwwz") pod "832ba633-f44c-4aa1-8791-65656ed2a744" (UID: "832ba633-f44c-4aa1-8791-65656ed2a744"). InnerVolumeSpecName "kube-api-access-bcwwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.704112 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "832ba633-f44c-4aa1-8791-65656ed2a744" (UID: "832ba633-f44c-4aa1-8791-65656ed2a744"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.704395 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "832ba633-f44c-4aa1-8791-65656ed2a744" (UID: "832ba633-f44c-4aa1-8791-65656ed2a744"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.722540 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" event={"ID":"832ba633-f44c-4aa1-8791-65656ed2a744","Type":"ContainerDied","Data":"77fc86a82e094a301d5737e72d87db3a216ca4893e7e853dbc8a7b6228506833"} Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.722607 4805 scope.go:117] "RemoveContainer" containerID="a55a68464c3d51078055096d330283b75e84d74dfda78385ce64b8a1eb75c76c" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.722712 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mwpbn" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.733110 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-fdd74686d-9pqrq"] Dec 16 11:58:48 crc kubenswrapper[4805]: E1216 11:58:48.733438 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832ba633-f44c-4aa1-8791-65656ed2a744" containerName="oauth-openshift" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.733463 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="832ba633-f44c-4aa1-8791-65656ed2a744" containerName="oauth-openshift" Dec 16 11:58:48 crc kubenswrapper[4805]: E1216 11:58:48.733486 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e" containerName="pruner" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.733494 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e" containerName="pruner" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.733643 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca60c1a4-e1c2-47a1-8235-7fc299dd9b2e" containerName="pruner" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.733666 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="832ba633-f44c-4aa1-8791-65656ed2a744" containerName="oauth-openshift" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.734291 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.738098 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.738164 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.738352 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.738662 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.738759 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.738934 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.738979 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.739113 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.739328 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.739371 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 16 
11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.740210 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.741028 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.744485 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-fdd74686d-9pqrq"] Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.755017 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.757809 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.758681 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-router-certs\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.758740 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.758769 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.758797 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.758839 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlfrn\" (UniqueName: \"kubernetes.io/projected/039578e7-2cc6-4ea2-acd2-deea629f905e-kube-api-access-mlfrn\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.758953 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.759107 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.759476 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/039578e7-2cc6-4ea2-acd2-deea629f905e-audit-dir\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.759633 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-session\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.759739 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-user-template-error\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.759845 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.759959 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/039578e7-2cc6-4ea2-acd2-deea629f905e-audit-policies\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.760118 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.760336 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-user-template-login\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.761172 4805 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.761292 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.761376 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.761485 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.761557 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.761634 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.761704 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.761765 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.761821 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.761891 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.761954 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcwwz\" (UniqueName: \"kubernetes.io/projected/832ba633-f44c-4aa1-8791-65656ed2a744-kube-api-access-bcwwz\") on node \"crc\" DevicePath \"\"" Dec 16 
11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.762014 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.762279 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/832ba633-f44c-4aa1-8791-65656ed2a744-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.786615 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mwpbn"] Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.787906 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.790515 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mwpbn"] Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.863558 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-service-ca\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.863608 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.863670 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/039578e7-2cc6-4ea2-acd2-deea629f905e-audit-dir\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.863690 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-session\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.863713 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-user-template-error\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.863752 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.863775 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/039578e7-2cc6-4ea2-acd2-deea629f905e-audit-policies\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.863802 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.863877 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-user-template-login\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.863925 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-router-certs\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.863959 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.863983 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.864017 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.864049 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlfrn\" (UniqueName: 
\"kubernetes.io/projected/039578e7-2cc6-4ea2-acd2-deea629f905e-kube-api-access-mlfrn\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.864533 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-service-ca\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.865822 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.866279 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/039578e7-2cc6-4ea2-acd2-deea629f905e-audit-dir\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.867043 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/039578e7-2cc6-4ea2-acd2-deea629f905e-audit-policies\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.869009 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.869690 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.869992 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.870075 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-user-template-login\") pod \"oauth-openshift-fdd74686d-9pqrq\" 
(UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.870316 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-user-template-error\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.870996 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-session\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.871597 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.881014 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-router-certs\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.872472 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/039578e7-2cc6-4ea2-acd2-deea629f905e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:48 crc kubenswrapper[4805]: I1216 11:58:48.887909 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlfrn\" (UniqueName: \"kubernetes.io/projected/039578e7-2cc6-4ea2-acd2-deea629f905e-kube-api-access-mlfrn\") pod \"oauth-openshift-fdd74686d-9pqrq\" (UID: \"039578e7-2cc6-4ea2-acd2-deea629f905e\") " pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:49 crc kubenswrapper[4805]: I1216 11:58:49.080231 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:49 crc kubenswrapper[4805]: I1216 11:58:49.493160 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-fdd74686d-9pqrq"] Dec 16 11:58:49 crc kubenswrapper[4805]: I1216 11:58:49.729059 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcqz5" event={"ID":"6c8a922f-a887-401f-9f22-18355a0a81d7","Type":"ContainerStarted","Data":"de7fb3c43e8cc3af6824eb8b8fe1b440af91cfe0d8e7be2f8a9755c637b205ce"} Dec 16 11:58:49 crc kubenswrapper[4805]: I1216 11:58:49.730902 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" event={"ID":"039578e7-2cc6-4ea2-acd2-deea629f905e","Type":"ContainerStarted","Data":"00d98e67f7fd0a0a15d707e667b0adb9e31da2bb393c6443472443e24a4cf70d"} Dec 16 11:58:50 crc kubenswrapper[4805]: I1216 11:58:50.540184 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="832ba633-f44c-4aa1-8791-65656ed2a744" path="/var/lib/kubelet/pods/832ba633-f44c-4aa1-8791-65656ed2a744/volumes" Dec 16 11:58:50 crc kubenswrapper[4805]: I1216 11:58:50.748547 4805 generic.go:334] "Generic (PLEG): container finished" podID="6c8a922f-a887-401f-9f22-18355a0a81d7" containerID="de7fb3c43e8cc3af6824eb8b8fe1b440af91cfe0d8e7be2f8a9755c637b205ce" exitCode=0 Dec 16 11:58:50 crc kubenswrapper[4805]: I1216 11:58:50.748645 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcqz5" event={"ID":"6c8a922f-a887-401f-9f22-18355a0a81d7","Type":"ContainerDied","Data":"de7fb3c43e8cc3af6824eb8b8fe1b440af91cfe0d8e7be2f8a9755c637b205ce"} Dec 16 11:58:51 crc kubenswrapper[4805]: I1216 11:58:51.756454 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" event={"ID":"039578e7-2cc6-4ea2-acd2-deea629f905e","Type":"ContainerStarted","Data":"79940bdaede3c0b5da44d041ad2814bf848b7695e569563f151170370e663034"} Dec 16 11:58:52 crc kubenswrapper[4805]: I1216 11:58:52.763554 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:52 crc kubenswrapper[4805]: I1216 11:58:52.797968 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" podStartSLOduration=34.797940025 podStartE2EDuration="34.797940025s" podCreationTimestamp="2025-12-16 11:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:58:52.79011519 +0000 UTC m=+206.508373015" watchObservedRunningTime="2025-12-16 11:58:52.797940025 +0000 UTC m=+206.516197900" Dec 16 11:58:52 crc kubenswrapper[4805]: I1216 11:58:52.925560 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-fdd74686d-9pqrq" Dec 16 11:58:57 crc kubenswrapper[4805]: I1216 11:58:57.071460 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 11:58:57 crc kubenswrapper[4805]: I1216 11:58:57.073332 4805 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 11:58:57 crc kubenswrapper[4805]: I1216 11:58:57.073422 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 11:58:57 crc kubenswrapper[4805]: I1216 11:58:57.074117 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 11:58:57 crc kubenswrapper[4805]: I1216 11:58:57.074195 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923" gracePeriod=600 Dec 16 11:58:59 crc kubenswrapper[4805]: I1216 11:58:59.815659 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923" exitCode=0 Dec 16 11:58:59 crc kubenswrapper[4805]: I1216 11:58:59.815710 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923"} Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.613740 4805 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.615321 4805 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.615338 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.615952 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad" gracePeriod=15 Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.616184 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159" gracePeriod=15 Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.616265 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26" gracePeriod=15 Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.616311 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5" gracePeriod=15 Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.616351 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6" gracePeriod=15 Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.616781 4805 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 16 11:59:11 crc kubenswrapper[4805]: E1216 11:59:11.617739 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.617780 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 16 11:59:11 crc kubenswrapper[4805]: E1216 11:59:11.617815 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.617848 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 16 11:59:11 crc kubenswrapper[4805]: E1216 11:59:11.617858 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.617866 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 16 11:59:11 crc kubenswrapper[4805]: E1216 11:59:11.617877 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.617885 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 16 11:59:11 crc kubenswrapper[4805]: E1216 11:59:11.617898 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.617925 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 16 11:59:11 crc kubenswrapper[4805]: E1216 11:59:11.617935 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.617942 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.618226 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.618243 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.618255 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.618266 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.618276 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.618285 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 16 11:59:11 crc kubenswrapper[4805]: E1216 11:59:11.618426 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.618438 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.671400 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.770040 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.770381 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.770435 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.770477 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.770518 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.770564 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.770611 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.770642 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.777016 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.998241 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.998323 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.998350 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.998376 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.998401 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.998453 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.998490 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:11 crc kubenswrapper[4805]: I1216 11:59:11.998528 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:12 crc kubenswrapper[4805]: I1216 11:59:12.001355 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:12 crc kubenswrapper[4805]: I1216 11:59:12.001434 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:12 crc kubenswrapper[4805]: I1216 11:59:12.001612 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Dec 16 11:59:12 crc 
kubenswrapper[4805]: I1216 11:59:12.001633 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:12 crc kubenswrapper[4805]: I1216 11:59:12.001642 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:12 crc kubenswrapper[4805]: I1216 11:59:12.001667 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Dec 16 11:59:12 crc kubenswrapper[4805]: I1216 11:59:12.001688 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:12 crc kubenswrapper[4805]: I1216 11:59:12.001715 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:12 crc kubenswrapper[4805]: I1216 11:59:12.001739 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:12 crc kubenswrapper[4805]: I1216 11:59:12.002078 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:12 crc kubenswrapper[4805]: I1216 11:59:12.020974 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 16 11:59:12 crc kubenswrapper[4805]: I1216 11:59:12.022252 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 16 11:59:12 crc kubenswrapper[4805]: I1216 11:59:12.023857 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6" exitCode=2 Dec 16 11:59:12 crc kubenswrapper[4805]: I1216 11:59:12.287600 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 11:59:13 crc kubenswrapper[4805]: I1216 11:59:13.031625 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 16 11:59:13 crc kubenswrapper[4805]: I1216 11:59:13.032835 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 16 11:59:13 crc kubenswrapper[4805]: I1216 11:59:13.034031 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26" exitCode=0 Dec 16 11:59:13 crc kubenswrapper[4805]: E1216 11:59:13.967636 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-5gm98.1881b018c25a4919\": dial tcp 38.102.83.27:6443: connect: connection refused" event="&Event{ObjectMeta:{machine-config-daemon-5gm98.1881b018c25a4919 openshift-machine-config-operator 26741 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-5gm98,UID:ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9,APIVersion:v1,ResourceVersion:26731,FieldPath:spec.containers{machine-config-daemon},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-16 11:55:49 +0000 UTC,LastTimestamp:2025-12-16 11:59:13.966677053 +0000 UTC m=+227.684934858,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 16 11:59:14 crc kubenswrapper[4805]: I1216 11:59:14.042409 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 16 11:59:14 crc kubenswrapper[4805]: I1216 11:59:14.043978 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 16 11:59:14 crc kubenswrapper[4805]: I1216 11:59:14.044670 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159" exitCode=0 Dec 16 11:59:14 crc kubenswrapper[4805]: I1216 11:59:14.044692 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5" exitCode=0 Dec 16 11:59:14 crc kubenswrapper[4805]: I1216 11:59:14.044724 4805 scope.go:117] "RemoveContainer" containerID="f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.016845 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.018288 4805 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.059986 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.061021 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad" exitCode=0 Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.061179 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.138666 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.138714 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.138777 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.138850 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.138876 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.139031 4805 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.139019 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.139042 4805 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.164384 4805 scope.go:117] "RemoveContainer" containerID="d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.231122 4805 scope.go:117] "RemoveContainer" containerID="f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca" Dec 16 11:59:15 crc kubenswrapper[4805]: E1216 11:59:15.232093 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\": container with ID starting with f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca not found: ID does not exist" containerID="f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.232136 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca"} err="failed to get container status \"f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\": rpc error: code = NotFound desc = could not find container \"f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\": container with ID starting with f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca not found: ID does not exist" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.232186 4805 scope.go:117] "RemoveContainer" containerID="1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.239715 4805 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.289500 4805 scope.go:117] "RemoveContainer" containerID="c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.333035 4805 scope.go:117] "RemoveContainer" containerID="6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.465594 4805 scope.go:117] "RemoveContainer" containerID="829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.554683 4805 scope.go:117] "RemoveContainer" containerID="3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.607354 4805 scope.go:117] "RemoveContainer" containerID="d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159" Dec 16 11:59:15 crc kubenswrapper[4805]: E1216 11:59:15.608073 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\": container with ID starting with d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159 not found: ID does not exist" containerID="d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159" Dec 16 11:59:15 crc 
Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.608130 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159"} err="failed to get container status \"d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\": rpc error: code = NotFound desc = could not find container \"d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159\": container with ID starting with d59d0767ba847281b1e275860770bc99156f77ab8e96655ad8f595346feaa159 not found: ID does not exist"
Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.608183 4805 scope.go:117] "RemoveContainer" containerID="f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca"
Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.608571 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca"} err="failed to get container status \"f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\": rpc error: code = NotFound desc = could not find container \"f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca\": container with ID starting with f713bee9b8b0fe34ff02846d6ecb886305127a2617c9d4c7f5269dd319fdd3ca not found: ID does not exist"
Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.608587 4805 scope.go:117] "RemoveContainer" containerID="1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26"
Dec 16 11:59:15 crc kubenswrapper[4805]: E1216 11:59:15.609033 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\": container with ID starting with 1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26 not found: ID does not exist" containerID="1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26"
Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.609066 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26"} err="failed to get container status \"1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\": rpc error: code = NotFound desc = could not find container \"1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26\": container with ID starting with 1f7c546c04950278cd8931a2804aabb1b5369b1103fe61162a0ce7d9f545fe26 not found: ID does not exist"
Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.609079 4805 scope.go:117] "RemoveContainer" containerID="c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5"
Dec 16 11:59:15 crc kubenswrapper[4805]: E1216 11:59:15.609285 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\": container with ID starting with c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5 not found: ID does not exist" containerID="c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5"
Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.609306 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5"} err="failed to get container status
\"c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\": rpc error: code = NotFound desc = could not find container \"c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5\": container with ID starting with c652c082f542b54fb867a07a89d86dcc94e1c3310e9f0ecf749b0fcb871628c5 not found: ID does not exist" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.609345 4805 scope.go:117] "RemoveContainer" containerID="6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6" Dec 16 11:59:15 crc kubenswrapper[4805]: E1216 11:59:15.609561 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\": container with ID starting with 6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6 not found: ID does not exist" containerID="6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.609590 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6"} err="failed to get container status \"6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\": rpc error: code = NotFound desc = could not find container \"6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6\": container with ID starting with 6de4fe171bbe35496817557323aa96dfb65060b9a84aa0c503df880e5bf48ea6 not found: ID does not exist" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.609620 4805 scope.go:117] "RemoveContainer" containerID="829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad" Dec 16 11:59:15 crc kubenswrapper[4805]: E1216 11:59:15.609809 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\": container with ID starting with 829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad not found: ID does not exist" containerID="829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.609831 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad"} err="failed to get container status \"829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\": rpc error: code = NotFound desc = could not find container \"829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad\": container with ID starting with 829b2de591b5e1528d1adb2378e6ccad2e20f9bde43c5ac9f1d8263b3f5342ad not found: ID does not exist" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.609868 4805 scope.go:117] "RemoveContainer" containerID="3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90" Dec 16 11:59:15 crc kubenswrapper[4805]: E1216 11:59:15.610380 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\": container with ID starting with 3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90 not found: ID does not exist" containerID="3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90" Dec 16 11:59:15 crc kubenswrapper[4805]: I1216 11:59:15.610429 4805 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90"} err="failed to get container status \"3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\": rpc error: code = NotFound desc = could not find container \"3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90\": container with ID starting with 3abc57b04ffd1b0cb8f8c3edcd52542c15483cb2671881ad8b888b568a0c4c90 not found: ID does not exist" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.082412 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"4cf9ea01ecb876171a7dbc132a7c5a959cb0acd82019690d41d23472548d1ff3"} Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.090472 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcqz5" event={"ID":"6c8a922f-a887-401f-9f22-18355a0a81d7","Type":"ContainerStarted","Data":"ce043966e2cb9f1f3a66b143f56141b501cea8e88cb615b9a7e5be7f764de412"} Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.095729 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fcgh" event={"ID":"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0","Type":"ContainerStarted","Data":"fbd4c53d1a2790d5062193ef0278881705c2d0ab803859c3d360357974e27ebc"} Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.101093 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sb75" event={"ID":"cf4f3221-afad-4a1d-8471-8018c2f08ddc","Type":"ContainerStarted","Data":"250935f05598fc3826fc61f47a815aa65b6437bc6cd8a5b18f95472a73adc986"} Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.103990 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dc7vx" event={"ID":"89930a1c-e5ae-4885-9dfd-f9c10df38c8a","Type":"ContainerStarted","Data":"3c98354b3eee8f15a142afd565555d8604eeb830c9954971c1f98679bc8cb40d"} Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.109274 4805 generic.go:334] "Generic (PLEG): container finished" podID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" containerID="1f24b460f65b352832ceb1ccd9c0c6b4440ad0297b111900ea916ed53fb48d77" exitCode=0 Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.109344 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhc2l" event={"ID":"3ed6f24b-f15e-4540-8fd2-1800a086a69e","Type":"ContainerDied","Data":"1f24b460f65b352832ceb1ccd9c0c6b4440ad0297b111900ea916ed53fb48d77"} Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.112235 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6c979989cc5760b301393b113e691cec07136ecfc9751f155ae3f41eb3cb3c31"} Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.112269 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5e749878bdb5e16915c77ef9d8fa86e75f1bd24c1ecb99982d78c71f9bcef666"} Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.113976 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9ppd" 
event={"ID":"7aba55d3-6790-4f26-9663-63bdf0c9991e","Type":"ContainerStarted","Data":"b636e768f00bcd211e2e548388359ca226e66cb19015998329afd1d8bb4d9e30"} Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.115325 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jc8cv" event={"ID":"127838d6-328d-46cb-b942-6aed7bfd5048","Type":"ContainerStarted","Data":"3fc73bd80bb86bed0db7f91481a6fc6b276ef13e461a7f9bd7dd1067576204a1"} Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.531952 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.805224 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.805626 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.809483 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" pod="openshift-marketplace/community-operators-dc7vx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.810031 4805 status_manager.go:851] "Failed to get status for pod" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.810489 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.810802 4805 status_manager.go:851] "Failed to get status for pod" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" pod="openshift-marketplace/redhat-marketplace-lhc2l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lhc2l\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.811190 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc 
kubenswrapper[4805]: I1216 11:59:16.811486 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.811837 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.812323 4805 status_manager.go:851] "Failed to get status for pod" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.812704 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.813055 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" pod="openshift-marketplace/community-operators-dc7vx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.813458 4805 status_manager.go:851] "Failed to get status for pod" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.813867 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.814360 4805 status_manager.go:851] "Failed to get status for pod" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" pod="openshift-marketplace/redhat-marketplace-lhc2l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lhc2l\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.814916 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.815322 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.815824 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.816322 4805 status_manager.go:851] "Failed to get status for pod" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:16 crc kubenswrapper[4805]: I1216 11:59:16.816585 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.122018 4805 generic.go:334] "Generic (PLEG): container finished" podID="21c4a860-6898-4e21-9425-9925d0b3380b" containerID="c7c3e6305ed15815bb98ba8b9ae77b0913d6b33fd0a17964912fb56eca048670" exitCode=0 Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.122117 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"21c4a860-6898-4e21-9425-9925d0b3380b","Type":"ContainerDied","Data":"c7c3e6305ed15815bb98ba8b9ae77b0913d6b33fd0a17964912fb56eca048670"} Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.122986 4805 status_manager.go:851] "Failed to get status for pod" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.123492 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.123927 4805 status_manager.go:851] "Failed to get status for pod" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.124306 4805 status_manager.go:851] "Failed to get status for pod" 
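The storm of "Failed to get status for pod" lines is the kubelet's status manager retrying against the API server that is still coming back up. It keeps only the newest status per pod in a cache, so a failed sync just leaves the cache dirty to be retried on the next tick, and no update is lost. A simplified model of that versioned cache (not the kubelet's actual types):

// Per-pod status cache: set() bumps the version, syncBatch() writes only
// versions not yet acknowledged by the API server, and failures are retried
// on the next pass with the cache unchanged.
package main

import (
	"errors"
	"fmt"
)

type statusCache struct {
	version map[string]int // newest local version per pod
	synced  map[string]int // last version the API server accepted
}

func (c *statusCache) set(pod string) { c.version[pod]++ }

func (c *statusCache) syncBatch(write func(pod string) error) {
	for pod, v := range c.version {
		if c.synced[pod] == v {
			continue // nothing new to report
		}
		if err := write(pod); err != nil {
			fmt.Printf("Failed to get status for pod %q: %v\n", pod, err)
			continue // cache stays dirty; retried next tick
		}
		c.synced[pod] = v
	}
}

func main() {
	c := &statusCache{version: map[string]int{}, synced: map[string]int{}}
	c.set("openshift-kube-apiserver/kube-apiserver-startup-monitor-crc")
	down := errors.New("dial tcp 38.102.83.27:6443: connect: connection refused")
	c.syncBatch(func(string) error { return down }) // API server restarting
	c.syncBatch(func(string) error { return nil })  // succeeds once it is back
}

On this single-node cluster every pod's status goes stale at once while the one kube-apiserver restarts, which is why the same handful of pods repeats through the lines below.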
podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" pod="openshift-marketplace/redhat-marketplace-lhc2l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lhc2l\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.124603 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.124881 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.125105 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.125360 4805 status_manager.go:851] "Failed to get status for pod" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.125622 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.125871 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" pod="openshift-marketplace/community-operators-dc7vx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.126305 4805 generic.go:334] "Generic (PLEG): container finished" podID="7aba55d3-6790-4f26-9663-63bdf0c9991e" containerID="b636e768f00bcd211e2e548388359ca226e66cb19015998329afd1d8bb4d9e30" exitCode=0 Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.126344 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9ppd" event={"ID":"7aba55d3-6790-4f26-9663-63bdf0c9991e","Type":"ContainerDied","Data":"b636e768f00bcd211e2e548388359ca226e66cb19015998329afd1d8bb4d9e30"} Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.126905 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.127153 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.127446 4805 status_manager.go:851] "Failed to get status for pod" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.127781 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.128042 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.128409 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" pod="openshift-marketplace/community-operators-dc7vx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.128648 4805 status_manager.go:851] "Failed to get status for pod" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.128882 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.130401 4805 status_manager.go:851] "Failed to get status for pod" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.131193 4805 status_manager.go:851] "Failed to get status for pod" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" 
Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.131922 4805 generic.go:334] "Generic (PLEG): container finished" podID="127838d6-328d-46cb-b942-6aed7bfd5048" containerID="3fc73bd80bb86bed0db7f91481a6fc6b276ef13e461a7f9bd7dd1067576204a1" exitCode=0
Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.132001 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jc8cv" event={"ID":"127838d6-328d-46cb-b942-6aed7bfd5048","Type":"ContainerDied","Data":"3fc73bd80bb86bed0db7f91481a6fc6b276ef13e461a7f9bd7dd1067576204a1"}
Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.133865 4805 generic.go:334] "Generic (PLEG): container finished" podID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" containerID="fbd4c53d1a2790d5062193ef0278881705c2d0ab803859c3d360357974e27ebc" exitCode=0
Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.133940 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fcgh" event={"ID":"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0","Type":"ContainerDied","Data":"fbd4c53d1a2790d5062193ef0278881705c2d0ab803859c3d360357974e27ebc"}
Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.143402 4805 generic.go:334] "Generic (PLEG): container finished" podID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" containerID="3c98354b3eee8f15a142afd565555d8604eeb830c9954971c1f98679bc8cb40d" exitCode=0
Dec 16 11:59:17 crc kubenswrapper[4805]: I1216 11:59:17.143589 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dc7vx" event={"ID":"89930a1c-e5ae-4885-9dfd-f9c10df38c8a","Type":"ContainerDied","Data":"3c98354b3eee8f15a142afd565555d8604eeb830c9954971c1f98679bc8cb40d"}
Dec 16 11:59:18 crc kubenswrapper[4805]: I1216 11:59:18.180358 4805 generic.go:334] "Generic (PLEG): container finished" podID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" containerID="250935f05598fc3826fc61f47a815aa65b6437bc6cd8a5b18f95472a73adc986" exitCode=0
Dec 16 11:59:18 crc kubenswrapper[4805]: I1216 11:59:18.180547 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sb75" event={"ID":"cf4f3221-afad-4a1d-8471-8018c2f08ddc","Type":"ContainerDied","Data":"250935f05598fc3826fc61f47a815aa65b6437bc6cd8a5b18f95472a73adc986"}
Dec 16 11:59:18 crc kubenswrapper[4805]: I1216 11:59:18.187905 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhc2l" event={"ID":"3ed6f24b-f15e-4540-8fd2-1800a086a69e","Type":"ContainerStarted","Data":"f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a"}
Dec 16 11:59:18 crc kubenswrapper[4805]: I1216 11:59:18.191826 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jc8cv" event={"ID":"127838d6-328d-46cb-b942-6aed7bfd5048","Type":"ContainerStarted","Data":"a8206cdfa671e35dc1e2e00f5366805706c22a6d79f032fc978a2e267ee56e37"}
Dec 16 11:59:19 crc kubenswrapper[4805]: I1216 11:59:19.203428 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9ppd" event={"ID":"7aba55d3-6790-4f26-9663-63bdf0c9991e","Type":"ContainerStarted","Data":"5689b0aa9b836b21a7a2b6423def6bee340a8db7c011c9474da0e99abec5ddc1"}
Dec 16 11:59:19 crc kubenswrapper[4805]: I1216 11:59:19.206080 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fcgh" event={"ID":"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0","Type":"ContainerStarted","Data":"e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612"}
Dec 16 11:59:19 crc kubenswrapper[4805]: I1216 11:59:19.225471 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"21c4a860-6898-4e21-9425-9925d0b3380b","Type":"ContainerDied","Data":"09ab325740cade80d7791920865e7cf35df16b2983d7e195ff3253b7865fcb5b"}
Dec 16 11:59:19 crc kubenswrapper[4805]: I1216 11:59:19.225512 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09ab325740cade80d7791920865e7cf35df16b2983d7e195ff3253b7865fcb5b"
Dec 16 11:59:19 crc kubenswrapper[4805]: I1216 11:59:19.253701 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 16 11:59:19 crc kubenswrapper[4805]: I1216 11:59:19.274563 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21c4a860-6898-4e21-9425-9925d0b3380b-kube-api-access\") pod \"21c4a860-6898-4e21-9425-9925d0b3380b\" (UID: \"21c4a860-6898-4e21-9425-9925d0b3380b\") "
Dec 16 11:59:19 crc kubenswrapper[4805]: I1216 11:59:19.274625 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21c4a860-6898-4e21-9425-9925d0b3380b-kubelet-dir\") pod \"21c4a860-6898-4e21-9425-9925d0b3380b\" (UID: \"21c4a860-6898-4e21-9425-9925d0b3380b\") "
Dec 16 11:59:19 crc kubenswrapper[4805]: I1216 11:59:19.274668 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21c4a860-6898-4e21-9425-9925d0b3380b-var-lock\") pod \"21c4a860-6898-4e21-9425-9925d0b3380b\" (UID: \"21c4a860-6898-4e21-9425-9925d0b3380b\") "
Dec 16 11:59:19 crc kubenswrapper[4805]: I1216 11:59:19.275209 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21c4a860-6898-4e21-9425-9925d0b3380b-var-lock" (OuterVolumeSpecName: "var-lock") pod "21c4a860-6898-4e21-9425-9925d0b3380b" (UID: "21c4a860-6898-4e21-9425-9925d0b3380b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 11:59:19 crc kubenswrapper[4805]: I1216 11:59:19.277212 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21c4a860-6898-4e21-9425-9925d0b3380b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "21c4a860-6898-4e21-9425-9925d0b3380b" (UID: "21c4a860-6898-4e21-9425-9925d0b3380b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 11:59:19 crc kubenswrapper[4805]: I1216 11:59:19.365463 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c4a860-6898-4e21-9425-9925d0b3380b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "21c4a860-6898-4e21-9425-9925d0b3380b" (UID: "21c4a860-6898-4e21-9425-9925d0b3380b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 11:59:19 crc kubenswrapper[4805]: I1216 11:59:19.375673 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21c4a860-6898-4e21-9425-9925d0b3380b-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 16 11:59:19 crc kubenswrapper[4805]: I1216 11:59:19.375717 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21c4a860-6898-4e21-9425-9925d0b3380b-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 16 11:59:19 crc kubenswrapper[4805]: I1216 11:59:19.375726 4805 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/21c4a860-6898-4e21-9425-9925d0b3380b-var-lock\") on node \"crc\" DevicePath \"\""
Dec 16 11:59:20 crc kubenswrapper[4805]: I1216 11:59:20.266168 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dc7vx" event={"ID":"89930a1c-e5ae-4885-9dfd-f9c10df38c8a","Type":"ContainerStarted","Data":"f28fbe94b0e8a821c9983c9fecddd2c344386df434a822703386014c70a5e3a1"}
Dec 16 11:59:20 crc kubenswrapper[4805]: I1216 11:59:20.266183 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.281867 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sb75" event={"ID":"cf4f3221-afad-4a1d-8471-8018c2f08ddc","Type":"ContainerStarted","Data":"647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342"}
Dec 16 11:59:21 crc kubenswrapper[4805]: E1216 11:59:21.318951 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused"
Dec 16 11:59:21 crc kubenswrapper[4805]: E1216 11:59:21.319305 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused"
38.102.83.27:6443: connect: connection refused" Dec 16 11:59:21 crc kubenswrapper[4805]: E1216 11:59:21.319541 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:21 crc kubenswrapper[4805]: E1216 11:59:21.319740 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:21 crc kubenswrapper[4805]: E1216 11:59:21.319941 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.319963 4805 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 16 11:59:21 crc kubenswrapper[4805]: E1216 11:59:21.320155 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="200ms" Dec 16 11:59:21 crc kubenswrapper[4805]: E1216 11:59:21.521197 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="400ms" Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.665088 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mcqz5" Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.666055 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mcqz5" Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.727530 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mcqz5" Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.728060 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.728254 4805 status_manager.go:851] "Failed to get status for pod" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.728661 4805 status_manager.go:851] "Failed to get status for pod" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" pod="openshift-marketplace/redhat-marketplace-lhc2l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lhc2l\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:21 crc 
kubenswrapper[4805]: I1216 11:59:21.729124 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.729359 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.729584 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.729771 4805 status_manager.go:851] "Failed to get status for pod" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.729960 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.730162 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" pod="openshift-marketplace/community-operators-dc7vx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.730368 4805 status_manager.go:851] "Failed to get status for pod" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.832228 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dc7vx" Dec 16 11:59:21 crc kubenswrapper[4805]: I1216 11:59:21.832286 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dc7vx" Dec 16 11:59:21 crc kubenswrapper[4805]: E1216 11:59:21.922319 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="800ms" Dec 16 11:59:22 crc kubenswrapper[4805]: 
I1216 11:59:22.343416 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mcqz5" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.343831 4805 status_manager.go:851] "Failed to get status for pod" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.343996 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.344271 4805 status_manager.go:851] "Failed to get status for pod" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" pod="openshift-marketplace/redhat-marketplace-lhc2l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lhc2l\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.344572 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.344910 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.345431 4805 status_manager.go:851] "Failed to get status for pod" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.345623 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.345776 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.345920 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" 
pod="openshift-marketplace/community-operators-dc7vx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.346061 4805 status_manager.go:851] "Failed to get status for pod" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.605802 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.605866 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.657818 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.658717 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" pod="openshift-marketplace/community-operators-dc7vx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.659100 4805 status_manager.go:851] "Failed to get status for pod" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.659404 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.659740 4805 status_manager.go:851] "Failed to get status for pod" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.660000 4805 status_manager.go:851] "Failed to get status for pod" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" pod="openshift-marketplace/redhat-marketplace-lhc2l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lhc2l\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.660297 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: 
connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.660580 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.660792 4805 status_manager.go:851] "Failed to get status for pod" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.661048 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.661452 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: E1216 11:59:22.737790 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="1.6s" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.821748 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.821797 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.863675 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.864388 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.864847 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" pod="openshift-marketplace/community-operators-dc7vx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.865222 4805 status_manager.go:851] "Failed to get status for pod" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.865489 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.865769 4805 status_manager.go:851] "Failed to get status for pod" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.866026 4805 status_manager.go:851] "Failed to get status for pod" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" pod="openshift-marketplace/redhat-marketplace-lhc2l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lhc2l\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.866322 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.866590 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.866923 4805 status_manager.go:851] "Failed to get status for pod" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.867300 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:22 crc kubenswrapper[4805]: I1216 11:59:22.881566 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dc7vx" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" containerName="registry-server" probeResult="failure" output=< Dec 16 11:59:22 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 16 11:59:22 crc kubenswrapper[4805]: > Dec 16 11:59:23 crc kubenswrapper[4805]: E1216 11:59:23.261098 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch 
status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:59:23Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:59:23Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:59:23Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-16T11:59:23Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[],\\\"sizeBytes\\\":1640596312},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1470e71026c2350281e7e8bf304ff9845476ba3f16dddcbb8ebb7cb68e77d671\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:1c7f53579bc0ec4af412f8157b4b1d90cc3e97c0ed16507f6e9bf22b01b9721f\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1200933213},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1bd771794f1785eb9137335fe2468e49b18e63fd12105305f837af4ebbe97e2e\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c29a0091864f17621dc93217af0c0ad31d9ade4837c2e1c2161172d691818df9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1152844048},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799
488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: E1216 11:59:23.262126 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc 
kubenswrapper[4805]: E1216 11:59:23.262555 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: E1216 11:59:23.262842 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: E1216 11:59:23.263185 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: E1216 11:59:23.263219 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.340917 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.341603 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.341952 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.342535 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" pod="openshift-marketplace/community-operators-dc7vx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.342984 4805 status_manager.go:851] "Failed to get status for pod" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.343321 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.343661 4805 status_manager.go:851] "Failed to get status for pod" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.344299 4805 status_manager.go:851] "Failed to get status for pod" 
podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" pod="openshift-marketplace/redhat-marketplace-lhc2l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lhc2l\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.344808 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.345090 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.345345 4805 status_manager.go:851] "Failed to get status for pod" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.345600 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.345981 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.346462 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" pod="openshift-marketplace/community-operators-dc7vx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.346767 4805 status_manager.go:851] "Failed to get status for pod" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.347000 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.347264 4805 
status_manager.go:851] "Failed to get status for pod" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.347558 4805 status_manager.go:851] "Failed to get status for pod" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" pod="openshift-marketplace/redhat-marketplace-lhc2l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lhc2l\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.347777 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.348029 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.348314 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: I1216 11:59:23.348579 4805 status_manager.go:851] "Failed to get status for pod" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:23 crc kubenswrapper[4805]: E1216 11:59:23.796084 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-5gm98.1881b018c25a4919\": dial tcp 38.102.83.27:6443: connect: connection refused" event="&Event{ObjectMeta:{machine-config-daemon-5gm98.1881b018c25a4919 openshift-machine-config-operator 26741 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-5gm98,UID:ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9,APIVersion:v1,ResourceVersion:26731,FieldPath:spec.containers{machine-config-daemon},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-16 11:55:49 +0000 UTC,LastTimestamp:2025-12-16 11:59:13.966677053 +0000 UTC m=+227.684934858,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 16 11:59:24 crc kubenswrapper[4805]: E1216 11:59:24.339369 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="3.2s" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.522611 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.524223 4805 status_manager.go:851] "Failed to get status for pod" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" pod="openshift-marketplace/redhat-marketplace-lhc2l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lhc2l\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.524594 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.525081 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.525460 4805 status_manager.go:851] "Failed to get status for pod" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.526744 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.526987 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.527238 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" pod="openshift-marketplace/community-operators-dc7vx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.527510 4805 status_manager.go:851] "Failed to get status for pod" 
podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.527787 4805 status_manager.go:851] "Failed to get status for pod" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.527971 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.540918 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="28ecaa42-ea80-45ad-af58-4ae0b0c48ac0" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.541160 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="28ecaa42-ea80-45ad-af58-4ae0b0c48ac0" Dec 16 11:59:24 crc kubenswrapper[4805]: E1216 11:59:24.541885 4805 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.542452 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.985309 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 11:59:24 crc kubenswrapper[4805]: I1216 11:59:24.985351 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.033816 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.034420 4805 status_manager.go:851] "Failed to get status for pod" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.034842 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.035159 4805 status_manager.go:851] "Failed to get status for pod" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.035446 4805 status_manager.go:851] "Failed to get status for pod" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" pod="openshift-marketplace/redhat-marketplace-lhc2l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lhc2l\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.035733 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.036008 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.036251 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.036518 4805 status_manager.go:851] "Failed to get status for pod" 
podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.036792 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.037047 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" pod="openshift-marketplace/community-operators-dc7vx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.061550 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.061848 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.062919 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.062948 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.299861 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0b61dd018d9a6a60bbf183aa3a0601c0206a55ea55bca9add599a2804ca61608"} Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.336868 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.337355 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.337581 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.337863 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" 
Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.338281 4805 status_manager.go:851] "Failed to get status for pod" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.338459 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.338633 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" pod="openshift-marketplace/community-operators-dc7vx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.338802 4805 status_manager.go:851] "Failed to get status for pod" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.338971 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.339130 4805 status_manager.go:851] "Failed to get status for pod" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:25 crc kubenswrapper[4805]: I1216 11:59:25.339338 4805 status_manager.go:851] "Failed to get status for pod" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" pod="openshift-marketplace/redhat-marketplace-lhc2l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lhc2l\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.108494 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4fcgh" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" containerName="registry-server" probeResult="failure" output=< Dec 16 11:59:26 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 16 11:59:26 crc kubenswrapper[4805]: > Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.114363 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8sb75" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" containerName="registry-server" probeResult="failure" output=< Dec 16 11:59:26 crc kubenswrapper[4805]: timeout: failed to connect service 
":50051" within 1s Dec 16 11:59:26 crc kubenswrapper[4805]: > Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.237004 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.237103 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.531050 4805 status_manager.go:851] "Failed to get status for pod" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" pod="openshift-marketplace/redhat-marketplace-lhc2l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lhc2l\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.531544 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.531922 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.532280 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.532719 4805 status_manager.go:851] "Failed to get status for pod" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.532907 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.533055 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" pod="openshift-marketplace/community-operators-dc7vx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.533233 4805 status_manager.go:851] "Failed to get status for pod" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.533379 4805 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.533519 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.539413 4805 status_manager.go:851] "Failed to get status for pod" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.613176 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 16 11:59:26 crc kubenswrapper[4805]: I1216 11:59:26.613255 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 11:59:27.357710 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 11:59:27.357767 4805 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030" exitCode=1 Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 11:59:27.357834 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030"} Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 11:59:27.358384 4805 scope.go:117] "RemoveContainer" containerID="4e8ce6db3f65c1a4dbc567d4d855f81a4dea6f029a31d1da092ac6d33f28f030" Dec 16 11:59:27 crc kubenswrapper[4805]: 
I1216 11:59:27.358708 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 11:59:27.358987 4805 status_manager.go:851] "Failed to get status for pod" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 11:59:27.360109 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"239f625a47eae0569fc07cc42403750256aa199ebfb0838c0a5d824da1dd5932"} Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 11:59:27.360217 4805 status_manager.go:851] "Failed to get status for pod" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" pod="openshift-marketplace/redhat-marketplace-lhc2l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lhc2l\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 11:59:27.360480 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 11:59:27.360733 4805 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 11:59:27.361163 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 11:59:27.361585 4805 status_manager.go:851] "Failed to get status for pod" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 11:59:27.361818 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 
11:59:27.362041 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 11:59:27.362417 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" pod="openshift-marketplace/community-operators-dc7vx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 11:59:27.362800 4805 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:27 crc kubenswrapper[4805]: I1216 11:59:27.363135 4805 status_manager.go:851] "Failed to get status for pod" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:27 crc kubenswrapper[4805]: E1216 11:59:27.540725 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="6.4s" Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.372517 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.374319 4805 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="239f625a47eae0569fc07cc42403750256aa199ebfb0838c0a5d824da1dd5932" exitCode=0 Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.374359 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"239f625a47eae0569fc07cc42403750256aa199ebfb0838c0a5d824da1dd5932"} Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.374790 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="28ecaa42-ea80-45ad-af58-4ae0b0c48ac0" Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.374830 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="28ecaa42-ea80-45ad-af58-4ae0b0c48ac0" Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.375170 4805 status_manager.go:851] "Failed to get status for pod" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" pod="openshift-marketplace/redhat-marketplace-lhc2l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lhc2l\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:28 crc kubenswrapper[4805]: E1216 11:59:28.375360 4805 
mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.375593 4805 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.375893 4805 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.376630 4805 status_manager.go:851] "Failed to get status for pod" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" pod="openshift-marketplace/redhat-operators-4fcgh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4fcgh\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.377003 4805 status_manager.go:851] "Failed to get status for pod" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" pod="openshift-marketplace/community-operators-mcqz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mcqz5\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.377346 4805 status_manager.go:851] "Failed to get status for pod" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5gm98\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.377619 4805 status_manager.go:851] "Failed to get status for pod" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" pod="openshift-marketplace/certified-operators-jc8cv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jc8cv\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.378016 4805 status_manager.go:851] "Failed to get status for pod" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" pod="openshift-marketplace/community-operators-dc7vx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dc7vx\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.378384 4805 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 
11:59:28.378769 4805 status_manager.go:851] "Failed to get status for pod" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" pod="openshift-marketplace/certified-operators-q9ppd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q9ppd\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.379102 4805 status_manager.go:851] "Failed to get status for pod" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" pod="openshift-marketplace/redhat-operators-8sb75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8sb75\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:28 crc kubenswrapper[4805]: I1216 11:59:28.379494 4805 status_manager.go:851] "Failed to get status for pod" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Dec 16 11:59:29 crc kubenswrapper[4805]: I1216 11:59:29.380734 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6ad100377aad38d2b29eae9dcef03c390a9d206a5b6f67e18cfc38cc988e041f"} Dec 16 11:59:29 crc kubenswrapper[4805]: I1216 11:59:29.384595 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 16 11:59:29 crc kubenswrapper[4805]: I1216 11:59:29.384642 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"13075b3b2979fddda4bce53b6d13b176f81b9b2010c01149610dd4c66de0b818"} Dec 16 11:59:31 crc kubenswrapper[4805]: I1216 11:59:31.597947 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3a7accd15be6369c622217a68872f581ce4d460af5601154eb5e825442a73fd6"} Dec 16 11:59:31 crc kubenswrapper[4805]: I1216 11:59:31.598493 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bd38ec958298869aacef44deb20b0a3abdebab6b842b66331bf7b3d24ffb088c"} Dec 16 11:59:31 crc kubenswrapper[4805]: I1216 11:59:31.894770 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dc7vx" Dec 16 11:59:32 crc kubenswrapper[4805]: I1216 11:59:32.032720 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dc7vx" Dec 16 11:59:32 crc kubenswrapper[4805]: I1216 11:59:32.698718 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="28ecaa42-ea80-45ad-af58-4ae0b0c48ac0" Dec 16 11:59:32 crc kubenswrapper[4805]: I1216 11:59:32.698751 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="28ecaa42-ea80-45ad-af58-4ae0b0c48ac0" Dec 16 11:59:32 crc kubenswrapper[4805]: I1216 11:59:32.698933 4805 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c339faa12486bcd56f7a09fad1b9b0ba0ca737d380335c5f32232fac2087a7c5"} Dec 16 11:59:32 crc kubenswrapper[4805]: I1216 11:59:32.698959 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f6acf271b2af4eb963b8e4abecb5dec31960d5392752a894f8f4ccc3587e28e3"} Dec 16 11:59:32 crc kubenswrapper[4805]: I1216 11:59:32.698987 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:34 crc kubenswrapper[4805]: I1216 11:59:34.579263 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:34 crc kubenswrapper[4805]: I1216 11:59:34.579582 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:34 crc kubenswrapper[4805]: I1216 11:59:34.583902 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:35 crc kubenswrapper[4805]: I1216 11:59:35.242523 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 11:59:35 crc kubenswrapper[4805]: I1216 11:59:35.287487 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 11:59:35 crc kubenswrapper[4805]: I1216 11:59:35.345966 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 11:59:35 crc kubenswrapper[4805]: I1216 11:59:35.369397 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 11:59:36 crc kubenswrapper[4805]: I1216 11:59:36.236615 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:59:36 crc kubenswrapper[4805]: I1216 11:59:36.483402 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:59:36 crc kubenswrapper[4805]: I1216 11:59:36.487993 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:59:38 crc kubenswrapper[4805]: I1216 11:59:38.353382 4805 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:38 crc kubenswrapper[4805]: I1216 11:59:38.833931 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="28ecaa42-ea80-45ad-af58-4ae0b0c48ac0" Dec 16 11:59:38 crc kubenswrapper[4805]: I1216 11:59:38.833963 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="28ecaa42-ea80-45ad-af58-4ae0b0c48ac0" Dec 16 11:59:38 crc kubenswrapper[4805]: I1216 11:59:38.850348 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:39 crc kubenswrapper[4805]: I1216 11:59:39.690828 4805 status_manager.go:861] "Pod was deleted and then recreated, 
skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7efd1bf0-ae0f-426c-b29b-dfe1d2f10f49" Dec 16 11:59:39 crc kubenswrapper[4805]: I1216 11:59:39.839774 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="28ecaa42-ea80-45ad-af58-4ae0b0c48ac0" Dec 16 11:59:39 crc kubenswrapper[4805]: I1216 11:59:39.840069 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="28ecaa42-ea80-45ad-af58-4ae0b0c48ac0" Dec 16 11:59:39 crc kubenswrapper[4805]: I1216 11:59:39.898803 4805 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7efd1bf0-ae0f-426c-b29b-dfe1d2f10f49" Dec 16 11:59:46 crc kubenswrapper[4805]: I1216 11:59:46.241048 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 16 11:59:48 crc kubenswrapper[4805]: I1216 11:59:48.602533 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 16 11:59:49 crc kubenswrapper[4805]: I1216 11:59:49.121329 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 16 11:59:49 crc kubenswrapper[4805]: I1216 11:59:49.420306 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 16 11:59:49 crc kubenswrapper[4805]: I1216 11:59:49.715204 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 16 11:59:50 crc kubenswrapper[4805]: I1216 11:59:50.137939 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 16 11:59:50 crc kubenswrapper[4805]: I1216 11:59:50.194933 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 16 11:59:50 crc kubenswrapper[4805]: I1216 11:59:50.648513 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 16 11:59:50 crc kubenswrapper[4805]: I1216 11:59:50.940826 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 16 11:59:50 crc kubenswrapper[4805]: I1216 11:59:50.960768 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 16 11:59:50 crc kubenswrapper[4805]: I1216 11:59:50.983436 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.073016 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.148881 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.163017 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.181118 4805 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.187075 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.224719 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.247664 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.316634 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.373785 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.481696 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.633847 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.675194 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.738233 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.750410 4805 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.815633 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.864454 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.914864 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 16 11:59:51 crc kubenswrapper[4805]: I1216 11:59:51.928997 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.073774 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.177072 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.177790 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.247122 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.339530 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.348874 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.372218 4805 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.378997 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.392172 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.471894 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.497192 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.513641 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.540128 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.615337 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.668514 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.720031 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.738336 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.873698 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.985955 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 16 11:59:52 crc kubenswrapper[4805]: I1216 11:59:52.993825 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.070379 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.132251 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.132324 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.141746 4805 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.203495 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.249483 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.285969 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.364719 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.456301 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.533557 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.591536 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.635639 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.801028 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.815620 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.905309 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 16 11:59:53 crc kubenswrapper[4805]: I1216 11:59:53.981284 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 16 11:59:54 crc kubenswrapper[4805]: I1216 11:59:54.013242 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 16 11:59:54 crc kubenswrapper[4805]: I1216 11:59:54.159134 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 16 11:59:54 crc kubenswrapper[4805]: I1216 11:59:54.334566 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 16 11:59:54 crc kubenswrapper[4805]: I1216 11:59:54.364279 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 16 11:59:54 crc kubenswrapper[4805]: I1216 11:59:54.523633 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 16 11:59:54 crc kubenswrapper[4805]: I1216 11:59:54.535506 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 16 11:59:54 crc kubenswrapper[4805]: I1216 11:59:54.635368 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 16 11:59:54 crc kubenswrapper[4805]: I1216 11:59:54.724595 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 16 11:59:54 crc kubenswrapper[4805]: I1216 11:59:54.741972 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 16 11:59:54 crc kubenswrapper[4805]: I1216 11:59:54.785680 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 16 11:59:54 crc kubenswrapper[4805]: I1216 11:59:54.933496 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.113641 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.121995 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.192657 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.305651 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.349459 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.374951 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.433546 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.504054 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.508726 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.533011 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.544452 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.552447 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.575970 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.577114 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 16 11:59:55 
crc kubenswrapper[4805]: I1216 11:59:55.578083 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.578126 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.633584 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.642031 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.681320 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.709026 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.766313 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.797959 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.801288 4805 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.820458 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.896582 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.905742 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 16 11:59:55 crc kubenswrapper[4805]: I1216 11:59:55.991216 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 16 11:59:56 crc kubenswrapper[4805]: I1216 11:59:56.117928 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 16 11:59:56 crc kubenswrapper[4805]: I1216 11:59:56.151704 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 16 11:59:56 crc kubenswrapper[4805]: I1216 11:59:56.187582 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 16 11:59:56 crc kubenswrapper[4805]: I1216 11:59:56.223428 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 16 11:59:56 crc kubenswrapper[4805]: I1216 11:59:56.316319 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 16 11:59:56 crc kubenswrapper[4805]: I1216 11:59:56.662264 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 16 
11:59:56 crc kubenswrapper[4805]: I1216 11:59:56.699814 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 16 11:59:56 crc kubenswrapper[4805]: I1216 11:59:56.715536 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 16 11:59:56 crc kubenswrapper[4805]: I1216 11:59:56.723055 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 16 11:59:56 crc kubenswrapper[4805]: I1216 11:59:56.811410 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 16 11:59:56 crc kubenswrapper[4805]: I1216 11:59:56.812199 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 16 11:59:56 crc kubenswrapper[4805]: I1216 11:59:56.977088 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 16 11:59:56 crc kubenswrapper[4805]: I1216 11:59:56.977672 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 16 11:59:57 crc kubenswrapper[4805]: I1216 11:59:57.048326 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 16 11:59:57 crc kubenswrapper[4805]: I1216 11:59:57.059627 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 16 11:59:57 crc kubenswrapper[4805]: I1216 11:59:57.076859 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 16 11:59:57 crc kubenswrapper[4805]: I1216 11:59:57.088977 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 16 11:59:57 crc kubenswrapper[4805]: I1216 11:59:57.330330 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 16 11:59:57 crc kubenswrapper[4805]: I1216 11:59:57.347415 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 16 11:59:57 crc kubenswrapper[4805]: I1216 11:59:57.355477 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 16 11:59:57 crc kubenswrapper[4805]: I1216 11:59:57.467643 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 16 11:59:57 crc kubenswrapper[4805]: I1216 11:59:57.467887 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 16 11:59:57 crc kubenswrapper[4805]: I1216 11:59:57.658794 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 16 11:59:57 crc kubenswrapper[4805]: I1216 11:59:57.841931 4805 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 16 11:59:57 crc kubenswrapper[4805]: I1216 11:59:57.845924 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 
16 11:59:57 crc kubenswrapper[4805]: I1216 11:59:57.935820 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.018890 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.073038 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.089519 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.171891 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.207593 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.247270 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.297656 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.338797 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.369584 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.456413 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.547061 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.640045 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.656558 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.694661 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.695509 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.781502 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.851163 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 16 11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.869208 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 16 
11:59:58 crc kubenswrapper[4805]: I1216 11:59:58.913341 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.068279 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.076675 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.234442 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.290884 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.418065 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.429382 4805 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.430616 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jc8cv" podStartSLOduration=50.241379105 podStartE2EDuration="2m38.430590695s" podCreationTimestamp="2025-12-16 11:57:21 +0000 UTC" firstStartedPulling="2025-12-16 11:57:28.699784342 +0000 UTC m=+122.418042147" lastFinishedPulling="2025-12-16 11:59:16.888995932 +0000 UTC m=+230.607253737" observedRunningTime="2025-12-16 11:59:38.385293022 +0000 UTC m=+252.103550837" watchObservedRunningTime="2025-12-16 11:59:59.430590695 +0000 UTC m=+273.148848560" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.431549 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8sb75" podStartSLOduration=44.831453531 podStartE2EDuration="2m36.431537852s" podCreationTimestamp="2025-12-16 11:57:23 +0000 UTC" firstStartedPulling="2025-12-16 11:57:28.675338177 +0000 UTC m=+122.393595982" lastFinishedPulling="2025-12-16 11:59:20.275422498 +0000 UTC m=+233.993680303" observedRunningTime="2025-12-16 11:59:39.720123651 +0000 UTC m=+253.438381466" watchObservedRunningTime="2025-12-16 11:59:59.431537852 +0000 UTC m=+273.149795717" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.434264 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4fcgh" podStartSLOduration=48.467249366 podStartE2EDuration="2m36.434250611s" podCreationTimestamp="2025-12-16 11:57:23 +0000 UTC" firstStartedPulling="2025-12-16 11:57:29.769878012 +0000 UTC m=+123.488135817" lastFinishedPulling="2025-12-16 11:59:17.736879257 +0000 UTC m=+231.455137062" observedRunningTime="2025-12-16 11:59:39.830842593 +0000 UTC m=+253.549100398" watchObservedRunningTime="2025-12-16 11:59:59.434250611 +0000 UTC m=+273.152508476" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.434690 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lhc2l" podStartSLOduration=47.1904391 podStartE2EDuration="2m35.434683213s" podCreationTimestamp="2025-12-16 11:57:24 +0000 UTC" firstStartedPulling="2025-12-16 11:57:28.682788502 +0000 
UTC m=+122.401046307" lastFinishedPulling="2025-12-16 11:59:16.927032615 +0000 UTC m=+230.645290420" observedRunningTime="2025-12-16 11:59:39.763848767 +0000 UTC m=+253.482106592" watchObservedRunningTime="2025-12-16 11:59:59.434683213 +0000 UTC m=+273.152941078" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.435560 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=48.435550038 podStartE2EDuration="48.435550038s" podCreationTimestamp="2025-12-16 11:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:59:39.783998786 +0000 UTC m=+253.502256591" watchObservedRunningTime="2025-12-16 11:59:59.435550038 +0000 UTC m=+273.153807913" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.436363 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dc7vx" podStartSLOduration=48.471362701 podStartE2EDuration="2m38.436356511s" podCreationTimestamp="2025-12-16 11:57:21 +0000 UTC" firstStartedPulling="2025-12-16 11:57:28.701377168 +0000 UTC m=+122.419634973" lastFinishedPulling="2025-12-16 11:59:18.666370978 +0000 UTC m=+232.384628783" observedRunningTime="2025-12-16 11:59:38.947837643 +0000 UTC m=+252.666095488" watchObservedRunningTime="2025-12-16 11:59:59.436356511 +0000 UTC m=+273.154614376" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.437476 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q9ppd" podStartSLOduration=49.226460929 podStartE2EDuration="2m38.437464543s" podCreationTimestamp="2025-12-16 11:57:21 +0000 UTC" firstStartedPulling="2025-12-16 11:57:28.683686918 +0000 UTC m=+122.401944723" lastFinishedPulling="2025-12-16 11:59:17.894690532 +0000 UTC m=+231.612948337" observedRunningTime="2025-12-16 11:59:39.681197812 +0000 UTC m=+253.399455617" watchObservedRunningTime="2025-12-16 11:59:59.437464543 +0000 UTC m=+273.155722448" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.437673 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mcqz5" podStartSLOduration=52.978799761 podStartE2EDuration="2m38.437665119s" podCreationTimestamp="2025-12-16 11:57:21 +0000 UTC" firstStartedPulling="2025-12-16 11:57:29.778519441 +0000 UTC m=+123.496777256" lastFinishedPulling="2025-12-16 11:59:15.237384809 +0000 UTC m=+228.955642614" observedRunningTime="2025-12-16 11:59:39.867571998 +0000 UTC m=+253.585829803" watchObservedRunningTime="2025-12-16 11:59:59.437665119 +0000 UTC m=+273.155922984" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.438933 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.438996 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.446226 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.468020 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.469410 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.469387374 podStartE2EDuration="21.469387374s" podCreationTimestamp="2025-12-16 11:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 11:59:59.467832649 +0000 UTC m=+273.186090454" watchObservedRunningTime="2025-12-16 11:59:59.469387374 +0000 UTC m=+273.187645219" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.532058 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.539698 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.643971 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.680890 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.691631 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.733646 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.986260 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 16 11:59:59 crc kubenswrapper[4805]: I1216 11:59:59.998298 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.037409 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.072741 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.148014 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.206301 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.322964 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.447523 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.495380 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.530241 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 16 12:00:00 
crc kubenswrapper[4805]: I1216 12:00:00.549788 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.568562 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.724730 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.769502 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.784487 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.786904 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.793040 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.865654 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.869542 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 16 12:00:00 crc kubenswrapper[4805]: I1216 12:00:00.874261 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 16 12:00:01 crc kubenswrapper[4805]: I1216 12:00:01.068401 4805 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 16 12:00:01 crc kubenswrapper[4805]: I1216 12:00:01.160313 4805 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 16 12:00:01 crc kubenswrapper[4805]: I1216 12:00:01.160534 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://6c979989cc5760b301393b113e691cec07136ecfc9751f155ae3f41eb3cb3c31" gracePeriod=5 Dec 16 12:00:01 crc kubenswrapper[4805]: I1216 12:00:01.284440 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 16 12:00:01 crc kubenswrapper[4805]: I1216 12:00:01.304294 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 16 12:00:01 crc kubenswrapper[4805]: I1216 12:00:01.333813 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 16 12:00:01 crc kubenswrapper[4805]: I1216 12:00:01.443379 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 16 12:00:01 crc kubenswrapper[4805]: I1216 12:00:01.511621 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 16 12:00:01 crc kubenswrapper[4805]: I1216 12:00:01.740818 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 16 12:00:01 crc kubenswrapper[4805]: I1216 12:00:01.763495 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 16 12:00:01 crc kubenswrapper[4805]: I1216 12:00:01.795046 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 16 12:00:01 crc kubenswrapper[4805]: I1216 12:00:01.797609 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 16 12:00:01 crc kubenswrapper[4805]: I1216 12:00:01.819721 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 16 12:00:01 crc kubenswrapper[4805]: I1216 12:00:01.945072 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 16 12:00:01 crc kubenswrapper[4805]: I1216 12:00:01.984846 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.039679 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.041655 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.270524 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.283995 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.289632 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.317830 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.436061 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.441042 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.523299 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.542527 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.833327 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.851886 4805 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.914511 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.915311 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.917258 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 16 12:00:02 crc kubenswrapper[4805]: I1216 12:00:02.983770 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 16 12:00:03 crc kubenswrapper[4805]: I1216 12:00:03.022962 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 16 12:00:03 crc kubenswrapper[4805]: I1216 12:00:03.142629 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 16 12:00:03 crc kubenswrapper[4805]: I1216 12:00:03.208588 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 16 12:00:03 crc kubenswrapper[4805]: I1216 12:00:03.252597 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 16 12:00:03 crc kubenswrapper[4805]: I1216 12:00:03.326381 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 16 12:00:03 crc kubenswrapper[4805]: I1216 12:00:03.612505 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 16 12:00:04 crc kubenswrapper[4805]: I1216 12:00:04.155884 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 16 12:00:04 crc kubenswrapper[4805]: I1216 12:00:04.232874 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.750556 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.750851 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.902280 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.902571 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.902739 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.902862 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.903022 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.902463 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.902665 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.902800 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.902921 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.903780 4805 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.903910 4805 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.904002 4805 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.904095 4805 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:06 crc kubenswrapper[4805]: I1216 12:00:06.910217 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 12:00:07 crc kubenswrapper[4805]: I1216 12:00:07.005909 4805 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:07 crc kubenswrapper[4805]: I1216 12:00:07.030007 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 16 12:00:07 crc kubenswrapper[4805]: I1216 12:00:07.030064 4805 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="6c979989cc5760b301393b113e691cec07136ecfc9751f155ae3f41eb3cb3c31" exitCode=137 Dec 16 12:00:07 crc kubenswrapper[4805]: I1216 12:00:07.030108 4805 scope.go:117] "RemoveContainer" containerID="6c979989cc5760b301393b113e691cec07136ecfc9751f155ae3f41eb3cb3c31" Dec 16 12:00:07 crc kubenswrapper[4805]: I1216 12:00:07.030253 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 16 12:00:07 crc kubenswrapper[4805]: I1216 12:00:07.045537 4805 scope.go:117] "RemoveContainer" containerID="6c979989cc5760b301393b113e691cec07136ecfc9751f155ae3f41eb3cb3c31" Dec 16 12:00:07 crc kubenswrapper[4805]: E1216 12:00:07.045938 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c979989cc5760b301393b113e691cec07136ecfc9751f155ae3f41eb3cb3c31\": container with ID starting with 6c979989cc5760b301393b113e691cec07136ecfc9751f155ae3f41eb3cb3c31 not found: ID does not exist" containerID="6c979989cc5760b301393b113e691cec07136ecfc9751f155ae3f41eb3cb3c31" Dec 16 12:00:07 crc kubenswrapper[4805]: I1216 12:00:07.045965 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c979989cc5760b301393b113e691cec07136ecfc9751f155ae3f41eb3cb3c31"} err="failed to get container status \"6c979989cc5760b301393b113e691cec07136ecfc9751f155ae3f41eb3cb3c31\": rpc error: code = NotFound desc = could not find container \"6c979989cc5760b301393b113e691cec07136ecfc9751f155ae3f41eb3cb3c31\": container with ID starting with 6c979989cc5760b301393b113e691cec07136ecfc9751f155ae3f41eb3cb3c31 not found: ID does not exist" Dec 16 12:00:08 crc kubenswrapper[4805]: I1216 12:00:08.532867 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 16 12:00:08 crc kubenswrapper[4805]: I1216 12:00:08.533929 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 16 12:00:08 crc kubenswrapper[4805]: I1216 12:00:08.553880 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 16 12:00:08 crc kubenswrapper[4805]: I1216 12:00:08.553950 4805 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="fdc6bc15-4245-496a-92dc-d8110c201cd7" Dec 16 12:00:08 crc kubenswrapper[4805]: I1216 12:00:08.557731 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 16 12:00:08 crc kubenswrapper[4805]: I1216 12:00:08.557782 4805 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="fdc6bc15-4245-496a-92dc-d8110c201cd7" Dec 16 12:00:08 crc kubenswrapper[4805]: I1216 12:00:08.904783 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 16 12:00:10 crc kubenswrapper[4805]: I1216 12:00:10.269413 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 16 12:00:10 crc kubenswrapper[4805]: I1216 12:00:10.369230 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 16 12:00:15 crc kubenswrapper[4805]: I1216 12:00:15.251061 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 16 12:00:15 crc kubenswrapper[4805]: I1216 12:00:15.482609 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 16 12:00:18 crc kubenswrapper[4805]: I1216 12:00:18.793358 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 16 12:00:19 crc kubenswrapper[4805]: I1216 12:00:19.957782 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 16 12:00:20 crc kubenswrapper[4805]: I1216 12:00:20.165644 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 16 12:00:20 crc kubenswrapper[4805]: I1216 12:00:20.627525 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.035463 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw"] Dec 16 12:00:21 crc kubenswrapper[4805]: E1216 12:00:21.036848 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" containerName="installer" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.036955 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" containerName="installer" Dec 16 12:00:21 crc kubenswrapper[4805]: E1216 12:00:21.037051 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.037125 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.037355 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c4a860-6898-4e21-9425-9925d0b3380b" containerName="installer" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.037431 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.037917 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.041928 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.042187 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.047023 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw"] Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.077960 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7f164bf-f966-44c7-82d2-1841b4a6cffe-config-volume\") pod \"collect-profiles-29431440-2d8fw\" (UID: \"b7f164bf-f966-44c7-82d2-1841b4a6cffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.078056 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs8n9\" (UniqueName: \"kubernetes.io/projected/b7f164bf-f966-44c7-82d2-1841b4a6cffe-kube-api-access-gs8n9\") pod \"collect-profiles-29431440-2d8fw\" (UID: \"b7f164bf-f966-44c7-82d2-1841b4a6cffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.078120 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7f164bf-f966-44c7-82d2-1841b4a6cffe-secret-volume\") pod \"collect-profiles-29431440-2d8fw\" (UID: \"b7f164bf-f966-44c7-82d2-1841b4a6cffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.179261 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7f164bf-f966-44c7-82d2-1841b4a6cffe-secret-volume\") pod \"collect-profiles-29431440-2d8fw\" (UID: \"b7f164bf-f966-44c7-82d2-1841b4a6cffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.179385 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7f164bf-f966-44c7-82d2-1841b4a6cffe-config-volume\") pod \"collect-profiles-29431440-2d8fw\" (UID: \"b7f164bf-f966-44c7-82d2-1841b4a6cffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.179523 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs8n9\" (UniqueName: \"kubernetes.io/projected/b7f164bf-f966-44c7-82d2-1841b4a6cffe-kube-api-access-gs8n9\") pod \"collect-profiles-29431440-2d8fw\" (UID: \"b7f164bf-f966-44c7-82d2-1841b4a6cffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.181456 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7f164bf-f966-44c7-82d2-1841b4a6cffe-config-volume\") pod 
\"collect-profiles-29431440-2d8fw\" (UID: \"b7f164bf-f966-44c7-82d2-1841b4a6cffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.193239 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7f164bf-f966-44c7-82d2-1841b4a6cffe-secret-volume\") pod \"collect-profiles-29431440-2d8fw\" (UID: \"b7f164bf-f966-44c7-82d2-1841b4a6cffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.207027 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs8n9\" (UniqueName: \"kubernetes.io/projected/b7f164bf-f966-44c7-82d2-1841b4a6cffe-kube-api-access-gs8n9\") pod \"collect-profiles-29431440-2d8fw\" (UID: \"b7f164bf-f966-44c7-82d2-1841b4a6cffe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.354864 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" Dec 16 12:00:21 crc kubenswrapper[4805]: E1216 12:00:21.508997 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25bbef6b_4746_41e9_83ef_20e9c54a7451.slice/crio-conmon-b41712d780d8adb4a1843e3f5a213406691aab1a9e575d76126bb28555213ae5.scope\": RecentStats: unable to find data in memory cache]" Dec 16 12:00:21 crc kubenswrapper[4805]: I1216 12:00:21.583419 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw"] Dec 16 12:00:22 crc kubenswrapper[4805]: I1216 12:00:22.113839 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 16 12:00:22 crc kubenswrapper[4805]: I1216 12:00:22.125529 4805 generic.go:334] "Generic (PLEG): container finished" podID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerID="b41712d780d8adb4a1843e3f5a213406691aab1a9e575d76126bb28555213ae5" exitCode=0 Dec 16 12:00:22 crc kubenswrapper[4805]: I1216 12:00:22.125615 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" event={"ID":"25bbef6b-4746-41e9-83ef-20e9c54a7451","Type":"ContainerDied","Data":"b41712d780d8adb4a1843e3f5a213406691aab1a9e575d76126bb28555213ae5"} Dec 16 12:00:22 crc kubenswrapper[4805]: I1216 12:00:22.126272 4805 scope.go:117] "RemoveContainer" containerID="b41712d780d8adb4a1843e3f5a213406691aab1a9e575d76126bb28555213ae5" Dec 16 12:00:22 crc kubenswrapper[4805]: I1216 12:00:22.129603 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29431440-2d8fw_b7f164bf-f966-44c7-82d2-1841b4a6cffe/collect-profiles/0.log" Dec 16 12:00:22 crc kubenswrapper[4805]: I1216 12:00:22.129654 4805 generic.go:334] "Generic (PLEG): container finished" podID="b7f164bf-f966-44c7-82d2-1841b4a6cffe" containerID="b30755934546dd9177672f2c5282aa2bd9ae99937412d7cfcd7485c7cc81f73c" exitCode=1 Dec 16 12:00:22 crc kubenswrapper[4805]: I1216 12:00:22.129682 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" 
event={"ID":"b7f164bf-f966-44c7-82d2-1841b4a6cffe","Type":"ContainerDied","Data":"b30755934546dd9177672f2c5282aa2bd9ae99937412d7cfcd7485c7cc81f73c"} Dec 16 12:00:22 crc kubenswrapper[4805]: I1216 12:00:22.129706 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" event={"ID":"b7f164bf-f966-44c7-82d2-1841b4a6cffe","Type":"ContainerStarted","Data":"9ac30a2878809acccbf5000736f536feea2a372d62f12801eb19f9abebe703e4"} Dec 16 12:00:22 crc kubenswrapper[4805]: I1216 12:00:22.578109 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.145916 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sjff7_25bbef6b-4746-41e9-83ef-20e9c54a7451/marketplace-operator/1.log" Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.147224 4805 generic.go:334] "Generic (PLEG): container finished" podID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerID="57147888e223e6bd21c4a5d3c8eb82f9aafeb7babf7024db712cbffa29fbec19" exitCode=1 Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.147392 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" event={"ID":"25bbef6b-4746-41e9-83ef-20e9c54a7451","Type":"ContainerDied","Data":"57147888e223e6bd21c4a5d3c8eb82f9aafeb7babf7024db712cbffa29fbec19"} Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.147509 4805 scope.go:117] "RemoveContainer" containerID="b41712d780d8adb4a1843e3f5a213406691aab1a9e575d76126bb28555213ae5" Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.148318 4805 scope.go:117] "RemoveContainer" containerID="57147888e223e6bd21c4a5d3c8eb82f9aafeb7babf7024db712cbffa29fbec19" Dec 16 12:00:23 crc kubenswrapper[4805]: E1216 12:00:23.148643 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-sjff7_openshift-marketplace(25bbef6b-4746-41e9-83ef-20e9c54a7451)\"" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.358016 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29431440-2d8fw_b7f164bf-f966-44c7-82d2-1841b4a6cffe/collect-profiles/0.log" Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.358287 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.495245 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7f164bf-f966-44c7-82d2-1841b4a6cffe-secret-volume\") pod \"b7f164bf-f966-44c7-82d2-1841b4a6cffe\" (UID: \"b7f164bf-f966-44c7-82d2-1841b4a6cffe\") " Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.495392 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs8n9\" (UniqueName: \"kubernetes.io/projected/b7f164bf-f966-44c7-82d2-1841b4a6cffe-kube-api-access-gs8n9\") pod \"b7f164bf-f966-44c7-82d2-1841b4a6cffe\" (UID: \"b7f164bf-f966-44c7-82d2-1841b4a6cffe\") " Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.496253 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7f164bf-f966-44c7-82d2-1841b4a6cffe-config-volume\") pod \"b7f164bf-f966-44c7-82d2-1841b4a6cffe\" (UID: \"b7f164bf-f966-44c7-82d2-1841b4a6cffe\") " Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.497010 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7f164bf-f966-44c7-82d2-1841b4a6cffe-config-volume" (OuterVolumeSpecName: "config-volume") pod "b7f164bf-f966-44c7-82d2-1841b4a6cffe" (UID: "b7f164bf-f966-44c7-82d2-1841b4a6cffe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.501323 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7f164bf-f966-44c7-82d2-1841b4a6cffe-kube-api-access-gs8n9" (OuterVolumeSpecName: "kube-api-access-gs8n9") pod "b7f164bf-f966-44c7-82d2-1841b4a6cffe" (UID: "b7f164bf-f966-44c7-82d2-1841b4a6cffe"). InnerVolumeSpecName "kube-api-access-gs8n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.501906 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7f164bf-f966-44c7-82d2-1841b4a6cffe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b7f164bf-f966-44c7-82d2-1841b4a6cffe" (UID: "b7f164bf-f966-44c7-82d2-1841b4a6cffe"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.597703 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7f164bf-f966-44c7-82d2-1841b4a6cffe-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.597996 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs8n9\" (UniqueName: \"kubernetes.io/projected/b7f164bf-f966-44c7-82d2-1841b4a6cffe-kube-api-access-gs8n9\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:23 crc kubenswrapper[4805]: I1216 12:00:23.598059 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7f164bf-f966-44c7-82d2-1841b4a6cffe-config-volume\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:24 crc kubenswrapper[4805]: I1216 12:00:24.029762 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 16 12:00:24 crc kubenswrapper[4805]: I1216 12:00:24.154957 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29431440-2d8fw_b7f164bf-f966-44c7-82d2-1841b4a6cffe/collect-profiles/0.log" Dec 16 12:00:24 crc kubenswrapper[4805]: I1216 12:00:24.155096 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" Dec 16 12:00:24 crc kubenswrapper[4805]: I1216 12:00:24.155105 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw" event={"ID":"b7f164bf-f966-44c7-82d2-1841b4a6cffe","Type":"ContainerDied","Data":"9ac30a2878809acccbf5000736f536feea2a372d62f12801eb19f9abebe703e4"} Dec 16 12:00:24 crc kubenswrapper[4805]: I1216 12:00:24.155234 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ac30a2878809acccbf5000736f536feea2a372d62f12801eb19f9abebe703e4" Dec 16 12:00:24 crc kubenswrapper[4805]: I1216 12:00:24.157609 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sjff7_25bbef6b-4746-41e9-83ef-20e9c54a7451/marketplace-operator/1.log" Dec 16 12:00:25 crc kubenswrapper[4805]: I1216 12:00:25.016536 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 16 12:00:25 crc kubenswrapper[4805]: I1216 12:00:25.104668 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 16 12:00:25 crc kubenswrapper[4805]: I1216 12:00:25.944182 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 16 12:00:26 crc kubenswrapper[4805]: I1216 12:00:26.181528 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 16 12:00:26 crc kubenswrapper[4805]: I1216 12:00:26.330161 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 16 12:00:26 crc kubenswrapper[4805]: I1216 12:00:26.878708 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 16 12:00:26 crc kubenswrapper[4805]: I1216 12:00:26.993236 4805 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 16 12:00:27 crc kubenswrapper[4805]: I1216 12:00:27.245504 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 16 12:00:27 crc kubenswrapper[4805]: I1216 12:00:27.460686 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 16 12:00:27 crc kubenswrapper[4805]: I1216 12:00:27.682851 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 16 12:00:28 crc kubenswrapper[4805]: I1216 12:00:28.897618 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 12:00:28 crc kubenswrapper[4805]: I1216 12:00:28.898499 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 12:00:28 crc kubenswrapper[4805]: I1216 12:00:28.899404 4805 scope.go:117] "RemoveContainer" containerID="57147888e223e6bd21c4a5d3c8eb82f9aafeb7babf7024db712cbffa29fbec19" Dec 16 12:00:28 crc kubenswrapper[4805]: E1216 12:00:28.899694 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-sjff7_openshift-marketplace(25bbef6b-4746-41e9-83ef-20e9c54a7451)\"" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" Dec 16 12:00:29 crc kubenswrapper[4805]: I1216 12:00:29.160872 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 16 12:00:29 crc kubenswrapper[4805]: I1216 12:00:29.184746 4805 scope.go:117] "RemoveContainer" containerID="57147888e223e6bd21c4a5d3c8eb82f9aafeb7babf7024db712cbffa29fbec19" Dec 16 12:00:29 crc kubenswrapper[4805]: E1216 12:00:29.184990 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-sjff7_openshift-marketplace(25bbef6b-4746-41e9-83ef-20e9c54a7451)\"" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" Dec 16 12:00:29 crc kubenswrapper[4805]: I1216 12:00:29.602753 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 16 12:00:30 crc kubenswrapper[4805]: I1216 12:00:30.907804 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.027082 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2"] Dec 16 12:00:31 crc kubenswrapper[4805]: E1216 12:00:31.027861 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7f164bf-f966-44c7-82d2-1841b4a6cffe" containerName="collect-profiles" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.028043 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7f164bf-f966-44c7-82d2-1841b4a6cffe" containerName="collect-profiles" Dec 16 12:00:31 crc 
kubenswrapper[4805]: I1216 12:00:31.028567 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7f164bf-f966-44c7-82d2-1841b4a6cffe" containerName="collect-profiles" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.029512 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.036448 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.036926 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.044628 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2"] Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.198747 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-secret-volume\") pod \"collect-profiles-29431440-mc9g2\" (UID: \"369f8d4d-bdd3-4d02-8869-93f0e5c6593f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.198783 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-config-volume\") pod \"collect-profiles-29431440-mc9g2\" (UID: \"369f8d4d-bdd3-4d02-8869-93f0e5c6593f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.198833 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2wgc\" (UniqueName: \"kubernetes.io/projected/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-kube-api-access-c2wgc\") pod \"collect-profiles-29431440-mc9g2\" (UID: \"369f8d4d-bdd3-4d02-8869-93f0e5c6593f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.299924 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-secret-volume\") pod \"collect-profiles-29431440-mc9g2\" (UID: \"369f8d4d-bdd3-4d02-8869-93f0e5c6593f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.299990 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-config-volume\") pod \"collect-profiles-29431440-mc9g2\" (UID: \"369f8d4d-bdd3-4d02-8869-93f0e5c6593f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.300056 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2wgc\" (UniqueName: \"kubernetes.io/projected/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-kube-api-access-c2wgc\") pod \"collect-profiles-29431440-mc9g2\" (UID: \"369f8d4d-bdd3-4d02-8869-93f0e5c6593f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.301302 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-config-volume\") pod \"collect-profiles-29431440-mc9g2\" (UID: \"369f8d4d-bdd3-4d02-8869-93f0e5c6593f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.310566 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-secret-volume\") pod \"collect-profiles-29431440-mc9g2\" (UID: \"369f8d4d-bdd3-4d02-8869-93f0e5c6593f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.322389 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2wgc\" (UniqueName: \"kubernetes.io/projected/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-kube-api-access-c2wgc\") pod \"collect-profiles-29431440-mc9g2\" (UID: \"369f8d4d-bdd3-4d02-8869-93f0e5c6593f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.355648 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.569076 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2"] Dec 16 12:00:31 crc kubenswrapper[4805]: I1216 12:00:31.993303 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 16 12:00:32 crc kubenswrapper[4805]: I1216 12:00:32.202679 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29431440-mc9g2_369f8d4d-bdd3-4d02-8869-93f0e5c6593f/collect-profiles/0.log" Dec 16 12:00:32 crc kubenswrapper[4805]: I1216 12:00:32.202744 4805 generic.go:334] "Generic (PLEG): container finished" podID="369f8d4d-bdd3-4d02-8869-93f0e5c6593f" containerID="903d00db4fe6781ab447ba234544ebe1e7e39550153631beddaa3fa6d3cb3322" exitCode=1 Dec 16 12:00:32 crc kubenswrapper[4805]: I1216 12:00:32.202780 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" event={"ID":"369f8d4d-bdd3-4d02-8869-93f0e5c6593f","Type":"ContainerDied","Data":"903d00db4fe6781ab447ba234544ebe1e7e39550153631beddaa3fa6d3cb3322"} Dec 16 12:00:32 crc kubenswrapper[4805]: I1216 12:00:32.202809 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" event={"ID":"369f8d4d-bdd3-4d02-8869-93f0e5c6593f","Type":"ContainerStarted","Data":"157360e70b1d9220325c0cd2a49f7e484286ee156cda7bdc5922c6461563bfa4"} Dec 16 12:00:32 crc kubenswrapper[4805]: I1216 12:00:32.529740 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 16 12:00:32 crc kubenswrapper[4805]: I1216 12:00:32.608254 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 16 
12:00:33 crc kubenswrapper[4805]: I1216 12:00:33.411071 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29431440-mc9g2_369f8d4d-bdd3-4d02-8869-93f0e5c6593f/collect-profiles/0.log" Dec 16 12:00:33 crc kubenswrapper[4805]: I1216 12:00:33.411409 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" Dec 16 12:00:33 crc kubenswrapper[4805]: I1216 12:00:33.553294 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-secret-volume\") pod \"369f8d4d-bdd3-4d02-8869-93f0e5c6593f\" (UID: \"369f8d4d-bdd3-4d02-8869-93f0e5c6593f\") " Dec 16 12:00:33 crc kubenswrapper[4805]: I1216 12:00:33.553373 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-config-volume\") pod \"369f8d4d-bdd3-4d02-8869-93f0e5c6593f\" (UID: \"369f8d4d-bdd3-4d02-8869-93f0e5c6593f\") " Dec 16 12:00:33 crc kubenswrapper[4805]: I1216 12:00:33.553409 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2wgc\" (UniqueName: \"kubernetes.io/projected/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-kube-api-access-c2wgc\") pod \"369f8d4d-bdd3-4d02-8869-93f0e5c6593f\" (UID: \"369f8d4d-bdd3-4d02-8869-93f0e5c6593f\") " Dec 16 12:00:33 crc kubenswrapper[4805]: I1216 12:00:33.554204 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-config-volume" (OuterVolumeSpecName: "config-volume") pod "369f8d4d-bdd3-4d02-8869-93f0e5c6593f" (UID: "369f8d4d-bdd3-4d02-8869-93f0e5c6593f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:00:33 crc kubenswrapper[4805]: I1216 12:00:33.558049 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-kube-api-access-c2wgc" (OuterVolumeSpecName: "kube-api-access-c2wgc") pod "369f8d4d-bdd3-4d02-8869-93f0e5c6593f" (UID: "369f8d4d-bdd3-4d02-8869-93f0e5c6593f"). InnerVolumeSpecName "kube-api-access-c2wgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:00:33 crc kubenswrapper[4805]: I1216 12:00:33.558337 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "369f8d4d-bdd3-4d02-8869-93f0e5c6593f" (UID: "369f8d4d-bdd3-4d02-8869-93f0e5c6593f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:00:33 crc kubenswrapper[4805]: I1216 12:00:33.655674 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:33 crc kubenswrapper[4805]: I1216 12:00:33.655725 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-config-volume\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:33 crc kubenswrapper[4805]: I1216 12:00:33.655743 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2wgc\" (UniqueName: \"kubernetes.io/projected/369f8d4d-bdd3-4d02-8869-93f0e5c6593f-kube-api-access-c2wgc\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:34 crc kubenswrapper[4805]: I1216 12:00:34.215576 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29431440-mc9g2_369f8d4d-bdd3-4d02-8869-93f0e5c6593f/collect-profiles/0.log" Dec 16 12:00:34 crc kubenswrapper[4805]: I1216 12:00:34.215698 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" event={"ID":"369f8d4d-bdd3-4d02-8869-93f0e5c6593f","Type":"ContainerDied","Data":"157360e70b1d9220325c0cd2a49f7e484286ee156cda7bdc5922c6461563bfa4"} Dec 16 12:00:34 crc kubenswrapper[4805]: I1216 12:00:34.215735 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="157360e70b1d9220325c0cd2a49f7e484286ee156cda7bdc5922c6461563bfa4" Dec 16 12:00:34 crc kubenswrapper[4805]: I1216 12:00:34.215802 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2" Dec 16 12:00:35 crc kubenswrapper[4805]: I1216 12:00:35.456509 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 16 12:00:35 crc kubenswrapper[4805]: I1216 12:00:35.651104 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 16 12:00:36 crc kubenswrapper[4805]: I1216 12:00:36.175400 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 16 12:00:36 crc kubenswrapper[4805]: I1216 12:00:36.696600 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 16 12:00:37 crc kubenswrapper[4805]: I1216 12:00:37.224876 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 16 12:00:42 crc kubenswrapper[4805]: I1216 12:00:42.525383 4805 scope.go:117] "RemoveContainer" containerID="57147888e223e6bd21c4a5d3c8eb82f9aafeb7babf7024db712cbffa29fbec19" Dec 16 12:00:42 crc kubenswrapper[4805]: I1216 12:00:42.720977 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 16 12:00:42 crc kubenswrapper[4805]: I1216 12:00:42.750436 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.531096 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jc8cv"] Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.531447 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jc8cv" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" containerName="registry-server" containerID="cri-o://a8206cdfa671e35dc1e2e00f5366805706c22a6d79f032fc978a2e267ee56e37" gracePeriod=30 Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.539802 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q9ppd"] Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.540090 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q9ppd" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" containerName="registry-server" containerID="cri-o://5689b0aa9b836b21a7a2b6423def6bee340a8db7c011c9474da0e99abec5ddc1" gracePeriod=30 Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.558537 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dc7vx"] Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.559120 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dc7vx" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" containerName="registry-server" containerID="cri-o://f28fbe94b0e8a821c9983c9fecddd2c344386df434a822703386014c70a5e3a1" gracePeriod=30 Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.582553 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mcqz5"] Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.582761 4805 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-mcqz5" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" containerName="registry-server" containerID="cri-o://ce043966e2cb9f1f3a66b143f56141b501cea8e88cb615b9a7e5be7f764de412" gracePeriod=30 Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.602490 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sjff7"] Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.608065 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6h6x"] Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.608320 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l6h6x" podUID="1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" containerName="registry-server" containerID="cri-o://2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb" gracePeriod=30 Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.615280 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhc2l"] Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.615673 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lhc2l" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" containerName="registry-server" containerID="cri-o://f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a" gracePeriod=30 Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.622595 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4fcgh"] Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.622860 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4fcgh" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" containerName="registry-server" containerID="cri-o://e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612" gracePeriod=30 Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.630572 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g2w5p"] Dec 16 12:00:43 crc kubenswrapper[4805]: E1216 12:00:43.630862 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="369f8d4d-bdd3-4d02-8869-93f0e5c6593f" containerName="collect-profiles" Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.630879 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="369f8d4d-bdd3-4d02-8869-93f0e5c6593f" containerName="collect-profiles" Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.630991 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="369f8d4d-bdd3-4d02-8869-93f0e5c6593f" containerName="collect-profiles" Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.631519 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.634267 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8sb75"] Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.634672 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8sb75" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" containerName="registry-server" containerID="cri-o://647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342" gracePeriod=30 Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.643933 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g2w5p"] Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.809167 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ec1e6ca1-a29e-4572-8326-f4119b22b30a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g2w5p\" (UID: \"ec1e6ca1-a29e-4572-8326-f4119b22b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.809227 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhk7n\" (UniqueName: \"kubernetes.io/projected/ec1e6ca1-a29e-4572-8326-f4119b22b30a-kube-api-access-vhk7n\") pod \"marketplace-operator-79b997595-g2w5p\" (UID: \"ec1e6ca1-a29e-4572-8326-f4119b22b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.809306 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec1e6ca1-a29e-4572-8326-f4119b22b30a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g2w5p\" (UID: \"ec1e6ca1-a29e-4572-8326-f4119b22b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.910126 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec1e6ca1-a29e-4572-8326-f4119b22b30a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g2w5p\" (UID: \"ec1e6ca1-a29e-4572-8326-f4119b22b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.911495 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ec1e6ca1-a29e-4572-8326-f4119b22b30a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g2w5p\" (UID: \"ec1e6ca1-a29e-4572-8326-f4119b22b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.912217 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhk7n\" (UniqueName: \"kubernetes.io/projected/ec1e6ca1-a29e-4572-8326-f4119b22b30a-kube-api-access-vhk7n\") pod \"marketplace-operator-79b997595-g2w5p\" (UID: \"ec1e6ca1-a29e-4572-8326-f4119b22b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.911416 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec1e6ca1-a29e-4572-8326-f4119b22b30a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g2w5p\" (UID: \"ec1e6ca1-a29e-4572-8326-f4119b22b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.916833 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ec1e6ca1-a29e-4572-8326-f4119b22b30a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g2w5p\" (UID: \"ec1e6ca1-a29e-4572-8326-f4119b22b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.928582 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhk7n\" (UniqueName: \"kubernetes.io/projected/ec1e6ca1-a29e-4572-8326-f4119b22b30a-kube-api-access-vhk7n\") pod \"marketplace-operator-79b997595-g2w5p\" (UID: \"ec1e6ca1-a29e-4572-8326-f4119b22b30a\") " pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" Dec 16 12:00:43 crc kubenswrapper[4805]: I1216 12:00:43.952497 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.163618 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g2w5p"] Dec 16 12:00:44 crc kubenswrapper[4805]: W1216 12:00:44.263650 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec1e6ca1_a29e_4572_8326_f4119b22b30a.slice/crio-cc176e268d525d7fb935da2ebf2bb202bc360591d2b31cc73ff8e6471dc7d906 WatchSource:0}: Error finding container cc176e268d525d7fb935da2ebf2bb202bc360591d2b31cc73ff8e6471dc7d906: Status 404 returned error can't find the container with id cc176e268d525d7fb935da2ebf2bb202bc360591d2b31cc73ff8e6471dc7d906 Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.304496 4805 generic.go:334] "Generic (PLEG): container finished" podID="6c8a922f-a887-401f-9f22-18355a0a81d7" containerID="ce043966e2cb9f1f3a66b143f56141b501cea8e88cb615b9a7e5be7f764de412" exitCode=0 Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.304577 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcqz5" event={"ID":"6c8a922f-a887-401f-9f22-18355a0a81d7","Type":"ContainerDied","Data":"ce043966e2cb9f1f3a66b143f56141b501cea8e88cb615b9a7e5be7f764de412"} Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.306451 4805 generic.go:334] "Generic (PLEG): container finished" podID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" containerID="f28fbe94b0e8a821c9983c9fecddd2c344386df434a822703386014c70a5e3a1" exitCode=0 Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.306487 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dc7vx" event={"ID":"89930a1c-e5ae-4885-9dfd-f9c10df38c8a","Type":"ContainerDied","Data":"f28fbe94b0e8a821c9983c9fecddd2c344386df434a822703386014c70a5e3a1"} Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.307312 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" 
event={"ID":"ec1e6ca1-a29e-4572-8326-f4119b22b30a","Type":"ContainerStarted","Data":"cc176e268d525d7fb935da2ebf2bb202bc360591d2b31cc73ff8e6471dc7d906"} Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.309295 4805 generic.go:334] "Generic (PLEG): container finished" podID="7aba55d3-6790-4f26-9663-63bdf0c9991e" containerID="5689b0aa9b836b21a7a2b6423def6bee340a8db7c011c9474da0e99abec5ddc1" exitCode=0 Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.309350 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9ppd" event={"ID":"7aba55d3-6790-4f26-9663-63bdf0c9991e","Type":"ContainerDied","Data":"5689b0aa9b836b21a7a2b6423def6bee340a8db7c011c9474da0e99abec5ddc1"} Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.310494 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sjff7_25bbef6b-4746-41e9-83ef-20e9c54a7451/marketplace-operator/1.log" Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.310528 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" event={"ID":"25bbef6b-4746-41e9-83ef-20e9c54a7451","Type":"ContainerStarted","Data":"e1dd6a9357cc0c1d71278b930bc28ed68f545fca94cf5784e58f794eb2888048"} Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.310663 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerName="marketplace-operator" containerID="cri-o://e1dd6a9357cc0c1d71278b930bc28ed68f545fca94cf5784e58f794eb2888048" gracePeriod=30 Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.312252 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.314160 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.791239 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mcqz5" Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.799091 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dc7vx" Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.809380 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.928436 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aba55d3-6790-4f26-9663-63bdf0c9991e-catalog-content\") pod \"7aba55d3-6790-4f26-9663-63bdf0c9991e\" (UID: \"7aba55d3-6790-4f26-9663-63bdf0c9991e\") " Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.928476 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-utilities\") pod \"89930a1c-e5ae-4885-9dfd-f9c10df38c8a\" (UID: \"89930a1c-e5ae-4885-9dfd-f9c10df38c8a\") " Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.928508 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9t2\" (UniqueName: \"kubernetes.io/projected/6c8a922f-a887-401f-9f22-18355a0a81d7-kube-api-access-lz9t2\") pod \"6c8a922f-a887-401f-9f22-18355a0a81d7\" (UID: \"6c8a922f-a887-401f-9f22-18355a0a81d7\") " Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.928553 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c8a922f-a887-401f-9f22-18355a0a81d7-utilities\") pod \"6c8a922f-a887-401f-9f22-18355a0a81d7\" (UID: \"6c8a922f-a887-401f-9f22-18355a0a81d7\") " Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.928579 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c8a922f-a887-401f-9f22-18355a0a81d7-catalog-content\") pod \"6c8a922f-a887-401f-9f22-18355a0a81d7\" (UID: \"6c8a922f-a887-401f-9f22-18355a0a81d7\") " Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.928604 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbv75\" (UniqueName: \"kubernetes.io/projected/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-kube-api-access-gbv75\") pod \"89930a1c-e5ae-4885-9dfd-f9c10df38c8a\" (UID: \"89930a1c-e5ae-4885-9dfd-f9c10df38c8a\") " Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.928620 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aba55d3-6790-4f26-9663-63bdf0c9991e-utilities\") pod \"7aba55d3-6790-4f26-9663-63bdf0c9991e\" (UID: \"7aba55d3-6790-4f26-9663-63bdf0c9991e\") " Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.928656 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-catalog-content\") pod \"89930a1c-e5ae-4885-9dfd-f9c10df38c8a\" (UID: \"89930a1c-e5ae-4885-9dfd-f9c10df38c8a\") " Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.928696 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffz4k\" (UniqueName: \"kubernetes.io/projected/7aba55d3-6790-4f26-9663-63bdf0c9991e-kube-api-access-ffz4k\") pod \"7aba55d3-6790-4f26-9663-63bdf0c9991e\" (UID: \"7aba55d3-6790-4f26-9663-63bdf0c9991e\") " Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.931697 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aba55d3-6790-4f26-9663-63bdf0c9991e-utilities" (OuterVolumeSpecName: "utilities") pod 
"7aba55d3-6790-4f26-9663-63bdf0c9991e" (UID: "7aba55d3-6790-4f26-9663-63bdf0c9991e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.932359 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-utilities" (OuterVolumeSpecName: "utilities") pod "89930a1c-e5ae-4885-9dfd-f9c10df38c8a" (UID: "89930a1c-e5ae-4885-9dfd-f9c10df38c8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.933098 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c8a922f-a887-401f-9f22-18355a0a81d7-utilities" (OuterVolumeSpecName: "utilities") pod "6c8a922f-a887-401f-9f22-18355a0a81d7" (UID: "6c8a922f-a887-401f-9f22-18355a0a81d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.936015 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c8a922f-a887-401f-9f22-18355a0a81d7-kube-api-access-lz9t2" (OuterVolumeSpecName: "kube-api-access-lz9t2") pod "6c8a922f-a887-401f-9f22-18355a0a81d7" (UID: "6c8a922f-a887-401f-9f22-18355a0a81d7"). InnerVolumeSpecName "kube-api-access-lz9t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:00:44 crc kubenswrapper[4805]: E1216 12:00:44.937454 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb is running failed: container process not found" containerID="2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.937712 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-kube-api-access-gbv75" (OuterVolumeSpecName: "kube-api-access-gbv75") pod "89930a1c-e5ae-4885-9dfd-f9c10df38c8a" (UID: "89930a1c-e5ae-4885-9dfd-f9c10df38c8a"). InnerVolumeSpecName "kube-api-access-gbv75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:00:44 crc kubenswrapper[4805]: E1216 12:00:44.938115 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb is running failed: container process not found" containerID="2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:00:44 crc kubenswrapper[4805]: E1216 12:00:44.938365 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb is running failed: container process not found" containerID="2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:00:44 crc kubenswrapper[4805]: E1216 12:00:44.938416 4805 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-l6h6x" podUID="1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" containerName="registry-server" Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.939300 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aba55d3-6790-4f26-9663-63bdf0c9991e-kube-api-access-ffz4k" (OuterVolumeSpecName: "kube-api-access-ffz4k") pod "7aba55d3-6790-4f26-9663-63bdf0c9991e" (UID: "7aba55d3-6790-4f26-9663-63bdf0c9991e"). InnerVolumeSpecName "kube-api-access-ffz4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:00:44 crc kubenswrapper[4805]: E1216 12:00:44.985826 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a is running failed: container process not found" containerID="f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:00:44 crc kubenswrapper[4805]: E1216 12:00:44.987486 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a is running failed: container process not found" containerID="f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:00:44 crc kubenswrapper[4805]: E1216 12:00:44.988554 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a is running failed: container process not found" containerID="f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:00:44 crc kubenswrapper[4805]: E1216 12:00:44.988592 4805 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-lhc2l" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" containerName="registry-server" Dec 16 12:00:44 crc kubenswrapper[4805]: I1216 12:00:44.994409 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aba55d3-6790-4f26-9663-63bdf0c9991e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7aba55d3-6790-4f26-9663-63bdf0c9991e" (UID: "7aba55d3-6790-4f26-9663-63bdf0c9991e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.021974 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c8a922f-a887-401f-9f22-18355a0a81d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c8a922f-a887-401f-9f22-18355a0a81d7" (UID: "6c8a922f-a887-401f-9f22-18355a0a81d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.030394 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c8a922f-a887-401f-9f22-18355a0a81d7-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.030423 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c8a922f-a887-401f-9f22-18355a0a81d7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.030433 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbv75\" (UniqueName: \"kubernetes.io/projected/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-kube-api-access-gbv75\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.030442 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aba55d3-6790-4f26-9663-63bdf0c9991e-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.030465 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffz4k\" (UniqueName: \"kubernetes.io/projected/7aba55d3-6790-4f26-9663-63bdf0c9991e-kube-api-access-ffz4k\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.030473 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aba55d3-6790-4f26-9663-63bdf0c9991e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.030480 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.030488 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9t2\" (UniqueName: \"kubernetes.io/projected/6c8a922f-a887-401f-9f22-18355a0a81d7-kube-api-access-lz9t2\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.039794 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89930a1c-e5ae-4885-9dfd-f9c10df38c8a" (UID: "89930a1c-e5ae-4885-9dfd-f9c10df38c8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.061968 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342 is running failed: container process not found" containerID="647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.062580 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342 is running failed: container process not found" containerID="647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.062829 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342 is running failed: container process not found" containerID="647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.062865 4805 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-8sb75" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" containerName="registry-server" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.066531 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612 is running failed: container process not found" containerID="e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.067580 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612 is running failed: container process not found" containerID="e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.067839 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612 is running failed: container process not found" containerID="e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.067894 4805 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612 is running failed: container process not found" probeType="Readiness" 
pod="openshift-marketplace/redhat-operators-4fcgh" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" containerName="registry-server" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.081711 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.131845 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89930a1c-e5ae-4885-9dfd-f9c10df38c8a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.211806 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.230718 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.233160 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxgff\" (UniqueName: \"kubernetes.io/projected/127838d6-328d-46cb-b942-6aed7bfd5048-kube-api-access-hxgff\") pod \"127838d6-328d-46cb-b942-6aed7bfd5048\" (UID: \"127838d6-328d-46cb-b942-6aed7bfd5048\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.233265 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/127838d6-328d-46cb-b942-6aed7bfd5048-utilities\") pod \"127838d6-328d-46cb-b942-6aed7bfd5048\" (UID: \"127838d6-328d-46cb-b942-6aed7bfd5048\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.233326 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127838d6-328d-46cb-b942-6aed7bfd5048-catalog-content\") pod \"127838d6-328d-46cb-b942-6aed7bfd5048\" (UID: \"127838d6-328d-46cb-b942-6aed7bfd5048\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.235364 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/127838d6-328d-46cb-b942-6aed7bfd5048-utilities" (OuterVolumeSpecName: "utilities") pod "127838d6-328d-46cb-b942-6aed7bfd5048" (UID: "127838d6-328d-46cb-b942-6aed7bfd5048"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.246670 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/127838d6-328d-46cb-b942-6aed7bfd5048-kube-api-access-hxgff" (OuterVolumeSpecName: "kube-api-access-hxgff") pod "127838d6-328d-46cb-b942-6aed7bfd5048" (UID: "127838d6-328d-46cb-b942-6aed7bfd5048"). InnerVolumeSpecName "kube-api-access-hxgff". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.272093 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.295960 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.319365 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sjff7_25bbef6b-4746-41e9-83ef-20e9c54a7451/marketplace-operator/1.log" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.319426 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.329913 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcqz5" event={"ID":"6c8a922f-a887-401f-9f22-18355a0a81d7","Type":"ContainerDied","Data":"9cf0b8fcca4045eabdda5eecd4f02cea49fb322fabf71789e71dce83e5b025e1"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.329974 4805 scope.go:117] "RemoveContainer" containerID="ce043966e2cb9f1f3a66b143f56141b501cea8e88cb615b9a7e5be7f764de412" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.330220 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mcqz5" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.334359 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-265tc\" (UniqueName: \"kubernetes.io/projected/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-kube-api-access-265tc\") pod \"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0\" (UID: \"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.334453 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-utilities\") pod \"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0\" (UID: \"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.334518 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-catalog-content\") pod \"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0\" (UID: \"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.334588 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-utilities\") pod \"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73\" (UID: \"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.334629 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jwnj\" (UniqueName: \"kubernetes.io/projected/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-kube-api-access-5jwnj\") pod \"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73\" (UID: \"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.334712 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-catalog-content\") pod \"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73\" (UID: \"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.334964 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/127838d6-328d-46cb-b942-6aed7bfd5048-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.334987 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxgff\" (UniqueName: \"kubernetes.io/projected/127838d6-328d-46cb-b942-6aed7bfd5048-kube-api-access-hxgff\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.341768 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dc7vx" event={"ID":"89930a1c-e5ae-4885-9dfd-f9c10df38c8a","Type":"ContainerDied","Data":"8526c46ae9b03e3abff8b9a130c20354a2742c66d8efbcf2ac3d0d4cf15e4c43"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.341886 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dc7vx" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.341939 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-utilities" (OuterVolumeSpecName: "utilities") pod "1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" (UID: "1f6f2cc5-7e1a-444e-94ea-a2e25c455a73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.350664 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-utilities" (OuterVolumeSpecName: "utilities") pod "4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" (UID: "4e0e2da1-5f69-4d95-b9eb-4b588258d3f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.350809 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/127838d6-328d-46cb-b942-6aed7bfd5048-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "127838d6-328d-46cb-b942-6aed7bfd5048" (UID: "127838d6-328d-46cb-b942-6aed7bfd5048"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.351953 4805 generic.go:334] "Generic (PLEG): container finished" podID="1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" containerID="2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb" exitCode=0 Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.352174 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6h6x" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.352699 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6h6x" event={"ID":"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73","Type":"ContainerDied","Data":"2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.352752 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6h6x" event={"ID":"1f6f2cc5-7e1a-444e-94ea-a2e25c455a73","Type":"ContainerDied","Data":"7d974e6f029136d4ff861671e6a8ba83c1dd2171e7e00d7ce628cd1d93229b45"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.353993 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-kube-api-access-5jwnj" (OuterVolumeSpecName: "kube-api-access-5jwnj") pod "1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" (UID: "1f6f2cc5-7e1a-444e-94ea-a2e25c455a73"). InnerVolumeSpecName "kube-api-access-5jwnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.359135 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-kube-api-access-265tc" (OuterVolumeSpecName: "kube-api-access-265tc") pod "4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" (UID: "4e0e2da1-5f69-4d95-b9eb-4b588258d3f0"). InnerVolumeSpecName "kube-api-access-265tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.361347 4805 generic.go:334] "Generic (PLEG): container finished" podID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" containerID="f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a" exitCode=0 Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.361468 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhc2l" event={"ID":"3ed6f24b-f15e-4540-8fd2-1800a086a69e","Type":"ContainerDied","Data":"f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.361532 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhc2l" event={"ID":"3ed6f24b-f15e-4540-8fd2-1800a086a69e","Type":"ContainerDied","Data":"bfa72c25e4b7b345f4dfac5a8e626ceee449d323e1c0c1f00aff99e35512404a"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.361669 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhc2l" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.368895 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" event={"ID":"ec1e6ca1-a29e-4572-8326-f4119b22b30a","Type":"ContainerStarted","Data":"11faa32929120a20d19864d1501cdf4436d989012918c9b1da9f4e3c81bd2e93"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.369740 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.375019 4805 scope.go:117] "RemoveContainer" containerID="de7fb3c43e8cc3af6824eb8b8fe1b440af91cfe0d8e7be2f8a9755c637b205ce" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.377280 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.385629 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9ppd" event={"ID":"7aba55d3-6790-4f26-9663-63bdf0c9991e","Type":"ContainerDied","Data":"fd67a44001a7a2cd60cc3492d5028287e6b6f9efea38a792e86058117b770d85"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.385752 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q9ppd" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.388294 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" (UID: "1f6f2cc5-7e1a-444e-94ea-a2e25c455a73"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.388997 4805 generic.go:334] "Generic (PLEG): container finished" podID="127838d6-328d-46cb-b942-6aed7bfd5048" containerID="a8206cdfa671e35dc1e2e00f5366805706c22a6d79f032fc978a2e267ee56e37" exitCode=0 Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.389054 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jc8cv" event={"ID":"127838d6-328d-46cb-b942-6aed7bfd5048","Type":"ContainerDied","Data":"a8206cdfa671e35dc1e2e00f5366805706c22a6d79f032fc978a2e267ee56e37"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.389079 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jc8cv" event={"ID":"127838d6-328d-46cb-b942-6aed7bfd5048","Type":"ContainerDied","Data":"f967ff59cb4cb32576be3fe8bf40bb2e15af221b9ac606a157e32094a8ee742e"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.389133 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jc8cv" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.409114 4805 scope.go:117] "RemoveContainer" containerID="e71cb658cbda46ec6fbec7ae8dd6ee7fdcb51b918cfbbdc70c6d31c8e6db22ce" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.409121 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mcqz5"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.427456 4805 generic.go:334] "Generic (PLEG): container finished" podID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" containerID="e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612" exitCode=0 Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.427589 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fcgh" event={"ID":"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0","Type":"ContainerDied","Data":"e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.427641 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fcgh" event={"ID":"4e0e2da1-5f69-4d95-b9eb-4b588258d3f0","Type":"ContainerDied","Data":"8c7ed40816b68ca7d8107a7328d549c4610f646e82910bf2c6e58104e9f5d9c9"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.427780 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fcgh" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.435535 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sjff7_25bbef6b-4746-41e9-83ef-20e9c54a7451/marketplace-operator/1.log" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.435587 4805 generic.go:334] "Generic (PLEG): container finished" podID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerID="e1dd6a9357cc0c1d71278b930bc28ed68f545fca94cf5784e58f794eb2888048" exitCode=0 Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.435678 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" event={"ID":"25bbef6b-4746-41e9-83ef-20e9c54a7451","Type":"ContainerDied","Data":"e1dd6a9357cc0c1d71278b930bc28ed68f545fca94cf5784e58f794eb2888048"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.435709 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" event={"ID":"25bbef6b-4746-41e9-83ef-20e9c54a7451","Type":"ContainerDied","Data":"c16e1f1d57f0128b31694f27a14baf22fa2385d9235738fb65d5e86917ada879"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.435806 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sjff7" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.440188 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm5p8\" (UniqueName: \"kubernetes.io/projected/cf4f3221-afad-4a1d-8471-8018c2f08ddc-kube-api-access-lm5p8\") pod \"cf4f3221-afad-4a1d-8471-8018c2f08ddc\" (UID: \"cf4f3221-afad-4a1d-8471-8018c2f08ddc\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.440486 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ed6f24b-f15e-4540-8fd2-1800a086a69e-utilities\") pod \"3ed6f24b-f15e-4540-8fd2-1800a086a69e\" (UID: \"3ed6f24b-f15e-4540-8fd2-1800a086a69e\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.440521 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4f3221-afad-4a1d-8471-8018c2f08ddc-catalog-content\") pod \"cf4f3221-afad-4a1d-8471-8018c2f08ddc\" (UID: \"cf4f3221-afad-4a1d-8471-8018c2f08ddc\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.444363 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mcqz5"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.446549 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ed6f24b-f15e-4540-8fd2-1800a086a69e-utilities" (OuterVolumeSpecName: "utilities") pod "3ed6f24b-f15e-4540-8fd2-1800a086a69e" (UID: "3ed6f24b-f15e-4540-8fd2-1800a086a69e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.443459 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bw8g\" (UniqueName: \"kubernetes.io/projected/3ed6f24b-f15e-4540-8fd2-1800a086a69e-kube-api-access-5bw8g\") pod \"3ed6f24b-f15e-4540-8fd2-1800a086a69e\" (UID: \"3ed6f24b-f15e-4540-8fd2-1800a086a69e\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.447277 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ed6f24b-f15e-4540-8fd2-1800a086a69e-catalog-content\") pod \"3ed6f24b-f15e-4540-8fd2-1800a086a69e\" (UID: \"3ed6f24b-f15e-4540-8fd2-1800a086a69e\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.447314 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25bbef6b-4746-41e9-83ef-20e9c54a7451-marketplace-trusted-ca\") pod \"25bbef6b-4746-41e9-83ef-20e9c54a7451\" (UID: \"25bbef6b-4746-41e9-83ef-20e9c54a7451\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.447363 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4f3221-afad-4a1d-8471-8018c2f08ddc-utilities\") pod \"cf4f3221-afad-4a1d-8471-8018c2f08ddc\" (UID: \"cf4f3221-afad-4a1d-8471-8018c2f08ddc\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.456778 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxpqt\" (UniqueName: \"kubernetes.io/projected/25bbef6b-4746-41e9-83ef-20e9c54a7451-kube-api-access-zxpqt\") pod \"25bbef6b-4746-41e9-83ef-20e9c54a7451\" (UID: 
\"25bbef6b-4746-41e9-83ef-20e9c54a7451\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.456858 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25bbef6b-4746-41e9-83ef-20e9c54a7451-marketplace-operator-metrics\") pod \"25bbef6b-4746-41e9-83ef-20e9c54a7451\" (UID: \"25bbef6b-4746-41e9-83ef-20e9c54a7451\") " Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.457521 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.457546 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-265tc\" (UniqueName: \"kubernetes.io/projected/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-kube-api-access-265tc\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.457561 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ed6f24b-f15e-4540-8fd2-1800a086a69e-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.457580 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.457593 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.457605 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jwnj\" (UniqueName: \"kubernetes.io/projected/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73-kube-api-access-5jwnj\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.457617 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/127838d6-328d-46cb-b942-6aed7bfd5048-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.451665 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed6f24b-f15e-4540-8fd2-1800a086a69e-kube-api-access-5bw8g" (OuterVolumeSpecName: "kube-api-access-5bw8g") pod "3ed6f24b-f15e-4540-8fd2-1800a086a69e" (UID: "3ed6f24b-f15e-4540-8fd2-1800a086a69e"). InnerVolumeSpecName "kube-api-access-5bw8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.455065 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4f3221-afad-4a1d-8471-8018c2f08ddc-utilities" (OuterVolumeSpecName: "utilities") pod "cf4f3221-afad-4a1d-8471-8018c2f08ddc" (UID: "cf4f3221-afad-4a1d-8471-8018c2f08ddc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.456356 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25bbef6b-4746-41e9-83ef-20e9c54a7451-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "25bbef6b-4746-41e9-83ef-20e9c54a7451" (UID: "25bbef6b-4746-41e9-83ef-20e9c54a7451"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.463828 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25bbef6b-4746-41e9-83ef-20e9c54a7451-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "25bbef6b-4746-41e9-83ef-20e9c54a7451" (UID: "25bbef6b-4746-41e9-83ef-20e9c54a7451"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.463889 4805 scope.go:117] "RemoveContainer" containerID="f28fbe94b0e8a821c9983c9fecddd2c344386df434a822703386014c70a5e3a1" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.460053 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4f3221-afad-4a1d-8471-8018c2f08ddc-kube-api-access-lm5p8" (OuterVolumeSpecName: "kube-api-access-lm5p8") pod "cf4f3221-afad-4a1d-8471-8018c2f08ddc" (UID: "cf4f3221-afad-4a1d-8471-8018c2f08ddc"). InnerVolumeSpecName "kube-api-access-lm5p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.464110 4805 generic.go:334] "Generic (PLEG): container finished" podID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" containerID="647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342" exitCode=0 Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.464248 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8sb75" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.464259 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dc7vx"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.464295 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sb75" event={"ID":"cf4f3221-afad-4a1d-8471-8018c2f08ddc","Type":"ContainerDied","Data":"647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.464327 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sb75" event={"ID":"cf4f3221-afad-4a1d-8471-8018c2f08ddc","Type":"ContainerDied","Data":"1982f272959844ac5765564d1ef26faf4b0d983c46ad334c1bb76cc6da17aeff"} Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.469847 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25bbef6b-4746-41e9-83ef-20e9c54a7451-kube-api-access-zxpqt" (OuterVolumeSpecName: "kube-api-access-zxpqt") pod "25bbef6b-4746-41e9-83ef-20e9c54a7451" (UID: "25bbef6b-4746-41e9-83ef-20e9c54a7451"). InnerVolumeSpecName "kube-api-access-zxpqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.483257 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ed6f24b-f15e-4540-8fd2-1800a086a69e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ed6f24b-f15e-4540-8fd2-1800a086a69e" (UID: "3ed6f24b-f15e-4540-8fd2-1800a086a69e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.486607 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dc7vx"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.489400 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-g2w5p" podStartSLOduration=2.489034167 podStartE2EDuration="2.489034167s" podCreationTimestamp="2025-12-16 12:00:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:00:45.442723531 +0000 UTC m=+319.160981346" watchObservedRunningTime="2025-12-16 12:00:45.489034167 +0000 UTC m=+319.207291992" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.489876 4805 scope.go:117] "RemoveContainer" containerID="3c98354b3eee8f15a142afd565555d8604eeb830c9954971c1f98679bc8cb40d" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.512260 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q9ppd"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.514923 4805 scope.go:117] "RemoveContainer" containerID="11ba1bf4f6741d2c308f80d5982e8b7fb76f6efaac16d591404e15a3b698fdf7" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.524429 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q9ppd"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.532220 4805 scope.go:117] "RemoveContainer" containerID="2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.532376 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jc8cv"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.541395 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jc8cv"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.546534 4805 scope.go:117] "RemoveContainer" containerID="52b3452f0c8cd05f5f2a4026f4296e6528c2d65dc9688d60ec4d91c4237f2dcd" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.551681 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" (UID: "4e0e2da1-5f69-4d95-b9eb-4b588258d3f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.559313 4805 scope.go:117] "RemoveContainer" containerID="3e2c778f14b0bf95f974a634c708be581b6df1fff315bddb1a704e83ca814d86" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.560160 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxpqt\" (UniqueName: \"kubernetes.io/projected/25bbef6b-4746-41e9-83ef-20e9c54a7451-kube-api-access-zxpqt\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.560183 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25bbef6b-4746-41e9-83ef-20e9c54a7451-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.560194 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm5p8\" (UniqueName: \"kubernetes.io/projected/cf4f3221-afad-4a1d-8471-8018c2f08ddc-kube-api-access-lm5p8\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.560205 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bw8g\" (UniqueName: \"kubernetes.io/projected/3ed6f24b-f15e-4540-8fd2-1800a086a69e-kube-api-access-5bw8g\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.560217 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ed6f24b-f15e-4540-8fd2-1800a086a69e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.560227 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25bbef6b-4746-41e9-83ef-20e9c54a7451-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.560241 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.560252 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4f3221-afad-4a1d-8471-8018c2f08ddc-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.571882 4805 scope.go:117] "RemoveContainer" containerID="2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.572308 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb\": container with ID starting with 2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb not found: ID does not exist" containerID="2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.572348 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb"} err="failed to get container status \"2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb\": rpc error: code = NotFound desc = could not find container 
\"2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb\": container with ID starting with 2236d71d28b593d8b07b6a63d77ade059fa500bb592515716068dbd98653ddcb not found: ID does not exist" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.572373 4805 scope.go:117] "RemoveContainer" containerID="52b3452f0c8cd05f5f2a4026f4296e6528c2d65dc9688d60ec4d91c4237f2dcd" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.572569 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b3452f0c8cd05f5f2a4026f4296e6528c2d65dc9688d60ec4d91c4237f2dcd\": container with ID starting with 52b3452f0c8cd05f5f2a4026f4296e6528c2d65dc9688d60ec4d91c4237f2dcd not found: ID does not exist" containerID="52b3452f0c8cd05f5f2a4026f4296e6528c2d65dc9688d60ec4d91c4237f2dcd" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.572594 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b3452f0c8cd05f5f2a4026f4296e6528c2d65dc9688d60ec4d91c4237f2dcd"} err="failed to get container status \"52b3452f0c8cd05f5f2a4026f4296e6528c2d65dc9688d60ec4d91c4237f2dcd\": rpc error: code = NotFound desc = could not find container \"52b3452f0c8cd05f5f2a4026f4296e6528c2d65dc9688d60ec4d91c4237f2dcd\": container with ID starting with 52b3452f0c8cd05f5f2a4026f4296e6528c2d65dc9688d60ec4d91c4237f2dcd not found: ID does not exist" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.572611 4805 scope.go:117] "RemoveContainer" containerID="3e2c778f14b0bf95f974a634c708be581b6df1fff315bddb1a704e83ca814d86" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.572869 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e2c778f14b0bf95f974a634c708be581b6df1fff315bddb1a704e83ca814d86\": container with ID starting with 3e2c778f14b0bf95f974a634c708be581b6df1fff315bddb1a704e83ca814d86 not found: ID does not exist" containerID="3e2c778f14b0bf95f974a634c708be581b6df1fff315bddb1a704e83ca814d86" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.572891 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e2c778f14b0bf95f974a634c708be581b6df1fff315bddb1a704e83ca814d86"} err="failed to get container status \"3e2c778f14b0bf95f974a634c708be581b6df1fff315bddb1a704e83ca814d86\": rpc error: code = NotFound desc = could not find container \"3e2c778f14b0bf95f974a634c708be581b6df1fff315bddb1a704e83ca814d86\": container with ID starting with 3e2c778f14b0bf95f974a634c708be581b6df1fff315bddb1a704e83ca814d86 not found: ID does not exist" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.572905 4805 scope.go:117] "RemoveContainer" containerID="f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.585280 4805 scope.go:117] "RemoveContainer" containerID="1f24b460f65b352832ceb1ccd9c0c6b4440ad0297b111900ea916ed53fb48d77" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.599671 4805 scope.go:117] "RemoveContainer" containerID="cc7aac10b7f12d897512be95cb0a2bd7c74a5a39feb42c67502b0efcfa68f2d3" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.616266 4805 scope.go:117] "RemoveContainer" containerID="f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.616960 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a\": container with ID starting with f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a not found: ID does not exist" containerID="f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.617023 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a"} err="failed to get container status \"f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a\": rpc error: code = NotFound desc = could not find container \"f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a\": container with ID starting with f28282d902ec10ee2d06516fe0270cbff203fc31af3d7fdad151ab30b7aeb97a not found: ID does not exist" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.617055 4805 scope.go:117] "RemoveContainer" containerID="1f24b460f65b352832ceb1ccd9c0c6b4440ad0297b111900ea916ed53fb48d77" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.617443 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f24b460f65b352832ceb1ccd9c0c6b4440ad0297b111900ea916ed53fb48d77\": container with ID starting with 1f24b460f65b352832ceb1ccd9c0c6b4440ad0297b111900ea916ed53fb48d77 not found: ID does not exist" containerID="1f24b460f65b352832ceb1ccd9c0c6b4440ad0297b111900ea916ed53fb48d77" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.617471 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f24b460f65b352832ceb1ccd9c0c6b4440ad0297b111900ea916ed53fb48d77"} err="failed to get container status \"1f24b460f65b352832ceb1ccd9c0c6b4440ad0297b111900ea916ed53fb48d77\": rpc error: code = NotFound desc = could not find container \"1f24b460f65b352832ceb1ccd9c0c6b4440ad0297b111900ea916ed53fb48d77\": container with ID starting with 1f24b460f65b352832ceb1ccd9c0c6b4440ad0297b111900ea916ed53fb48d77 not found: ID does not exist" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.617488 4805 scope.go:117] "RemoveContainer" containerID="cc7aac10b7f12d897512be95cb0a2bd7c74a5a39feb42c67502b0efcfa68f2d3" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.617739 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc7aac10b7f12d897512be95cb0a2bd7c74a5a39feb42c67502b0efcfa68f2d3\": container with ID starting with cc7aac10b7f12d897512be95cb0a2bd7c74a5a39feb42c67502b0efcfa68f2d3 not found: ID does not exist" containerID="cc7aac10b7f12d897512be95cb0a2bd7c74a5a39feb42c67502b0efcfa68f2d3" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.617764 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc7aac10b7f12d897512be95cb0a2bd7c74a5a39feb42c67502b0efcfa68f2d3"} err="failed to get container status \"cc7aac10b7f12d897512be95cb0a2bd7c74a5a39feb42c67502b0efcfa68f2d3\": rpc error: code = NotFound desc = could not find container \"cc7aac10b7f12d897512be95cb0a2bd7c74a5a39feb42c67502b0efcfa68f2d3\": container with ID starting with cc7aac10b7f12d897512be95cb0a2bd7c74a5a39feb42c67502b0efcfa68f2d3 not found: ID does not exist" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.617780 4805 scope.go:117] "RemoveContainer" containerID="5689b0aa9b836b21a7a2b6423def6bee340a8db7c011c9474da0e99abec5ddc1" Dec 16 
12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.629923 4805 scope.go:117] "RemoveContainer" containerID="b636e768f00bcd211e2e548388359ca226e66cb19015998329afd1d8bb4d9e30" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.631267 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4f3221-afad-4a1d-8471-8018c2f08ddc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf4f3221-afad-4a1d-8471-8018c2f08ddc" (UID: "cf4f3221-afad-4a1d-8471-8018c2f08ddc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.645014 4805 scope.go:117] "RemoveContainer" containerID="f6bd65b24ec53b49ef1b666236f492b5f31ed6478ac0946ae39d51ab231e840c" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.661212 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4f3221-afad-4a1d-8471-8018c2f08ddc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.664395 4805 scope.go:117] "RemoveContainer" containerID="a8206cdfa671e35dc1e2e00f5366805706c22a6d79f032fc978a2e267ee56e37" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.720661 4805 scope.go:117] "RemoveContainer" containerID="3fc73bd80bb86bed0db7f91481a6fc6b276ef13e461a7f9bd7dd1067576204a1" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.732576 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6h6x"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.741708 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6h6x"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.746134 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhc2l"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.749856 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhc2l"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.752305 4805 scope.go:117] "RemoveContainer" containerID="e7e3ba305de4b6b8f5837613db08019c8152481cc499cdde62b33ab666e4be07" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.770341 4805 scope.go:117] "RemoveContainer" containerID="a8206cdfa671e35dc1e2e00f5366805706c22a6d79f032fc978a2e267ee56e37" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.770780 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8206cdfa671e35dc1e2e00f5366805706c22a6d79f032fc978a2e267ee56e37\": container with ID starting with a8206cdfa671e35dc1e2e00f5366805706c22a6d79f032fc978a2e267ee56e37 not found: ID does not exist" containerID="a8206cdfa671e35dc1e2e00f5366805706c22a6d79f032fc978a2e267ee56e37" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.770810 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8206cdfa671e35dc1e2e00f5366805706c22a6d79f032fc978a2e267ee56e37"} err="failed to get container status \"a8206cdfa671e35dc1e2e00f5366805706c22a6d79f032fc978a2e267ee56e37\": rpc error: code = NotFound desc = could not find container \"a8206cdfa671e35dc1e2e00f5366805706c22a6d79f032fc978a2e267ee56e37\": container with ID starting with a8206cdfa671e35dc1e2e00f5366805706c22a6d79f032fc978a2e267ee56e37 not found: ID does not exist" Dec 16 12:00:45 crc 
kubenswrapper[4805]: I1216 12:00:45.770835 4805 scope.go:117] "RemoveContainer" containerID="3fc73bd80bb86bed0db7f91481a6fc6b276ef13e461a7f9bd7dd1067576204a1" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.771205 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc73bd80bb86bed0db7f91481a6fc6b276ef13e461a7f9bd7dd1067576204a1\": container with ID starting with 3fc73bd80bb86bed0db7f91481a6fc6b276ef13e461a7f9bd7dd1067576204a1 not found: ID does not exist" containerID="3fc73bd80bb86bed0db7f91481a6fc6b276ef13e461a7f9bd7dd1067576204a1" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.771230 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc73bd80bb86bed0db7f91481a6fc6b276ef13e461a7f9bd7dd1067576204a1"} err="failed to get container status \"3fc73bd80bb86bed0db7f91481a6fc6b276ef13e461a7f9bd7dd1067576204a1\": rpc error: code = NotFound desc = could not find container \"3fc73bd80bb86bed0db7f91481a6fc6b276ef13e461a7f9bd7dd1067576204a1\": container with ID starting with 3fc73bd80bb86bed0db7f91481a6fc6b276ef13e461a7f9bd7dd1067576204a1 not found: ID does not exist" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.771260 4805 scope.go:117] "RemoveContainer" containerID="e7e3ba305de4b6b8f5837613db08019c8152481cc499cdde62b33ab666e4be07" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.771520 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7e3ba305de4b6b8f5837613db08019c8152481cc499cdde62b33ab666e4be07\": container with ID starting with e7e3ba305de4b6b8f5837613db08019c8152481cc499cdde62b33ab666e4be07 not found: ID does not exist" containerID="e7e3ba305de4b6b8f5837613db08019c8152481cc499cdde62b33ab666e4be07" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.771541 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7e3ba305de4b6b8f5837613db08019c8152481cc499cdde62b33ab666e4be07"} err="failed to get container status \"e7e3ba305de4b6b8f5837613db08019c8152481cc499cdde62b33ab666e4be07\": rpc error: code = NotFound desc = could not find container \"e7e3ba305de4b6b8f5837613db08019c8152481cc499cdde62b33ab666e4be07\": container with ID starting with e7e3ba305de4b6b8f5837613db08019c8152481cc499cdde62b33ab666e4be07 not found: ID does not exist" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.771558 4805 scope.go:117] "RemoveContainer" containerID="e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.800191 4805 scope.go:117] "RemoveContainer" containerID="fbd4c53d1a2790d5062193ef0278881705c2d0ab803859c3d360357974e27ebc" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.806187 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sjff7"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.813638 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sjff7"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.819726 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4fcgh"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.821192 4805 scope.go:117] "RemoveContainer" containerID="5f07ffa9dc4fddcf5a74426322f66df4d1d507553c8fc02b3ed97bf574fab80b" Dec 16 12:00:45 crc 
kubenswrapper[4805]: I1216 12:00:45.825265 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4fcgh"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.826681 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8sb75"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.830515 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8sb75"] Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.837874 4805 scope.go:117] "RemoveContainer" containerID="e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.838339 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612\": container with ID starting with e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612 not found: ID does not exist" containerID="e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.838381 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612"} err="failed to get container status \"e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612\": rpc error: code = NotFound desc = could not find container \"e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612\": container with ID starting with e1103eb2ac73c89d117ddad798fc1313bb77873d4d804c6f43e7b827d85c7612 not found: ID does not exist" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.838421 4805 scope.go:117] "RemoveContainer" containerID="fbd4c53d1a2790d5062193ef0278881705c2d0ab803859c3d360357974e27ebc" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.838815 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbd4c53d1a2790d5062193ef0278881705c2d0ab803859c3d360357974e27ebc\": container with ID starting with fbd4c53d1a2790d5062193ef0278881705c2d0ab803859c3d360357974e27ebc not found: ID does not exist" containerID="fbd4c53d1a2790d5062193ef0278881705c2d0ab803859c3d360357974e27ebc" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.838848 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbd4c53d1a2790d5062193ef0278881705c2d0ab803859c3d360357974e27ebc"} err="failed to get container status \"fbd4c53d1a2790d5062193ef0278881705c2d0ab803859c3d360357974e27ebc\": rpc error: code = NotFound desc = could not find container \"fbd4c53d1a2790d5062193ef0278881705c2d0ab803859c3d360357974e27ebc\": container with ID starting with fbd4c53d1a2790d5062193ef0278881705c2d0ab803859c3d360357974e27ebc not found: ID does not exist" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.838863 4805 scope.go:117] "RemoveContainer" containerID="5f07ffa9dc4fddcf5a74426322f66df4d1d507553c8fc02b3ed97bf574fab80b" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.839133 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f07ffa9dc4fddcf5a74426322f66df4d1d507553c8fc02b3ed97bf574fab80b\": container with ID starting with 5f07ffa9dc4fddcf5a74426322f66df4d1d507553c8fc02b3ed97bf574fab80b not found: ID does not exist" 
containerID="5f07ffa9dc4fddcf5a74426322f66df4d1d507553c8fc02b3ed97bf574fab80b" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.839191 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f07ffa9dc4fddcf5a74426322f66df4d1d507553c8fc02b3ed97bf574fab80b"} err="failed to get container status \"5f07ffa9dc4fddcf5a74426322f66df4d1d507553c8fc02b3ed97bf574fab80b\": rpc error: code = NotFound desc = could not find container \"5f07ffa9dc4fddcf5a74426322f66df4d1d507553c8fc02b3ed97bf574fab80b\": container with ID starting with 5f07ffa9dc4fddcf5a74426322f66df4d1d507553c8fc02b3ed97bf574fab80b not found: ID does not exist" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.839238 4805 scope.go:117] "RemoveContainer" containerID="e1dd6a9357cc0c1d71278b930bc28ed68f545fca94cf5784e58f794eb2888048" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.851313 4805 scope.go:117] "RemoveContainer" containerID="57147888e223e6bd21c4a5d3c8eb82f9aafeb7babf7024db712cbffa29fbec19" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.863052 4805 scope.go:117] "RemoveContainer" containerID="e1dd6a9357cc0c1d71278b930bc28ed68f545fca94cf5784e58f794eb2888048" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.863462 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1dd6a9357cc0c1d71278b930bc28ed68f545fca94cf5784e58f794eb2888048\": container with ID starting with e1dd6a9357cc0c1d71278b930bc28ed68f545fca94cf5784e58f794eb2888048 not found: ID does not exist" containerID="e1dd6a9357cc0c1d71278b930bc28ed68f545fca94cf5784e58f794eb2888048" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.863509 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1dd6a9357cc0c1d71278b930bc28ed68f545fca94cf5784e58f794eb2888048"} err="failed to get container status \"e1dd6a9357cc0c1d71278b930bc28ed68f545fca94cf5784e58f794eb2888048\": rpc error: code = NotFound desc = could not find container \"e1dd6a9357cc0c1d71278b930bc28ed68f545fca94cf5784e58f794eb2888048\": container with ID starting with e1dd6a9357cc0c1d71278b930bc28ed68f545fca94cf5784e58f794eb2888048 not found: ID does not exist" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.863554 4805 scope.go:117] "RemoveContainer" containerID="57147888e223e6bd21c4a5d3c8eb82f9aafeb7babf7024db712cbffa29fbec19" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.863903 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57147888e223e6bd21c4a5d3c8eb82f9aafeb7babf7024db712cbffa29fbec19\": container with ID starting with 57147888e223e6bd21c4a5d3c8eb82f9aafeb7babf7024db712cbffa29fbec19 not found: ID does not exist" containerID="57147888e223e6bd21c4a5d3c8eb82f9aafeb7babf7024db712cbffa29fbec19" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.863950 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57147888e223e6bd21c4a5d3c8eb82f9aafeb7babf7024db712cbffa29fbec19"} err="failed to get container status \"57147888e223e6bd21c4a5d3c8eb82f9aafeb7babf7024db712cbffa29fbec19\": rpc error: code = NotFound desc = could not find container \"57147888e223e6bd21c4a5d3c8eb82f9aafeb7babf7024db712cbffa29fbec19\": container with ID starting with 57147888e223e6bd21c4a5d3c8eb82f9aafeb7babf7024db712cbffa29fbec19 not found: ID does not exist" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 
12:00:45.863976 4805 scope.go:117] "RemoveContainer" containerID="647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.875521 4805 scope.go:117] "RemoveContainer" containerID="250935f05598fc3826fc61f47a815aa65b6437bc6cd8a5b18f95472a73adc986" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.891942 4805 scope.go:117] "RemoveContainer" containerID="75109a9a53e466d1061baf2a8d854a739a6bf6ebb1f897a6d2668a52b65dbf5c" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.906220 4805 scope.go:117] "RemoveContainer" containerID="647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.906560 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342\": container with ID starting with 647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342 not found: ID does not exist" containerID="647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.906591 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342"} err="failed to get container status \"647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342\": rpc error: code = NotFound desc = could not find container \"647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342\": container with ID starting with 647cda0bf59ded3e3f70b6128993b1a1250e6e1cdc02f1a5d17da682310bb342 not found: ID does not exist" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.906614 4805 scope.go:117] "RemoveContainer" containerID="250935f05598fc3826fc61f47a815aa65b6437bc6cd8a5b18f95472a73adc986" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.906820 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"250935f05598fc3826fc61f47a815aa65b6437bc6cd8a5b18f95472a73adc986\": container with ID starting with 250935f05598fc3826fc61f47a815aa65b6437bc6cd8a5b18f95472a73adc986 not found: ID does not exist" containerID="250935f05598fc3826fc61f47a815aa65b6437bc6cd8a5b18f95472a73adc986" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.906852 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250935f05598fc3826fc61f47a815aa65b6437bc6cd8a5b18f95472a73adc986"} err="failed to get container status \"250935f05598fc3826fc61f47a815aa65b6437bc6cd8a5b18f95472a73adc986\": rpc error: code = NotFound desc = could not find container \"250935f05598fc3826fc61f47a815aa65b6437bc6cd8a5b18f95472a73adc986\": container with ID starting with 250935f05598fc3826fc61f47a815aa65b6437bc6cd8a5b18f95472a73adc986 not found: ID does not exist" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.906867 4805 scope.go:117] "RemoveContainer" containerID="75109a9a53e466d1061baf2a8d854a739a6bf6ebb1f897a6d2668a52b65dbf5c" Dec 16 12:00:45 crc kubenswrapper[4805]: E1216 12:00:45.907091 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75109a9a53e466d1061baf2a8d854a739a6bf6ebb1f897a6d2668a52b65dbf5c\": container with ID starting with 75109a9a53e466d1061baf2a8d854a739a6bf6ebb1f897a6d2668a52b65dbf5c not found: ID does not exist" 
containerID="75109a9a53e466d1061baf2a8d854a739a6bf6ebb1f897a6d2668a52b65dbf5c" Dec 16 12:00:45 crc kubenswrapper[4805]: I1216 12:00:45.907117 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75109a9a53e466d1061baf2a8d854a739a6bf6ebb1f897a6d2668a52b65dbf5c"} err="failed to get container status \"75109a9a53e466d1061baf2a8d854a739a6bf6ebb1f897a6d2668a52b65dbf5c\": rpc error: code = NotFound desc = could not find container \"75109a9a53e466d1061baf2a8d854a739a6bf6ebb1f897a6d2668a52b65dbf5c\": container with ID starting with 75109a9a53e466d1061baf2a8d854a739a6bf6ebb1f897a6d2668a52b65dbf5c not found: ID does not exist" Dec 16 12:00:46 crc kubenswrapper[4805]: I1216 12:00:46.532373 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" path="/var/lib/kubelet/pods/127838d6-328d-46cb-b942-6aed7bfd5048/volumes" Dec 16 12:00:46 crc kubenswrapper[4805]: I1216 12:00:46.534687 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" path="/var/lib/kubelet/pods/1f6f2cc5-7e1a-444e-94ea-a2e25c455a73/volumes" Dec 16 12:00:46 crc kubenswrapper[4805]: I1216 12:00:46.535492 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" path="/var/lib/kubelet/pods/25bbef6b-4746-41e9-83ef-20e9c54a7451/volumes" Dec 16 12:00:46 crc kubenswrapper[4805]: I1216 12:00:46.536475 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" path="/var/lib/kubelet/pods/3ed6f24b-f15e-4540-8fd2-1800a086a69e/volumes" Dec 16 12:00:46 crc kubenswrapper[4805]: I1216 12:00:46.537067 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" path="/var/lib/kubelet/pods/4e0e2da1-5f69-4d95-b9eb-4b588258d3f0/volumes" Dec 16 12:00:46 crc kubenswrapper[4805]: I1216 12:00:46.538355 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" path="/var/lib/kubelet/pods/6c8a922f-a887-401f-9f22-18355a0a81d7/volumes" Dec 16 12:00:46 crc kubenswrapper[4805]: I1216 12:00:46.539016 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" path="/var/lib/kubelet/pods/7aba55d3-6790-4f26-9663-63bdf0c9991e/volumes" Dec 16 12:00:46 crc kubenswrapper[4805]: I1216 12:00:46.539710 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" path="/var/lib/kubelet/pods/89930a1c-e5ae-4885-9dfd-f9c10df38c8a/volumes" Dec 16 12:00:46 crc kubenswrapper[4805]: I1216 12:00:46.540853 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" path="/var/lib/kubelet/pods/cf4f3221-afad-4a1d-8471-8018c2f08ddc/volumes" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.020947 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs"] Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022280 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022298 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: 
E1216 12:00:51.022311 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022318 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022330 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" containerName="extract-utilities" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022338 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" containerName="extract-utilities" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022349 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022357 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022367 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022374 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022385 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022392 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022405 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022412 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022423 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022430 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022438 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" containerName="extract-utilities" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022444 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" containerName="extract-utilities" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022453 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" containerName="extract-utilities" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022460 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" containerName="extract-utilities" Dec 16 
12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022471 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022478 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022487 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" containerName="extract-utilities" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022495 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" containerName="extract-utilities" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022506 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" containerName="extract-utilities" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022513 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" containerName="extract-utilities" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022526 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022534 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022555 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022562 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022575 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022582 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022592 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022599 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022607 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" containerName="extract-utilities" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022614 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" containerName="extract-utilities" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022723 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerName="marketplace-operator" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022761 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" 
containerName="marketplace-operator" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022774 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022782 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" containerName="extract-content" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022792 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022799 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022847 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerName="marketplace-operator" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022858 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerName="marketplace-operator" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022868 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" containerName="extract-utilities" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022875 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" containerName="extract-utilities" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022886 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerName="marketplace-operator" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022893 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerName="marketplace-operator" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022902 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" containerName="extract-utilities" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022910 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" containerName="extract-utilities" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022927 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022934 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: E1216 12:00:51.022942 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.022948 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.023083 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c8a922f-a887-401f-9f22-18355a0a81d7" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.023100 4805 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerName="marketplace-operator" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.023111 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerName="marketplace-operator" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.023118 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="127838d6-328d-46cb-b942-6aed7bfd5048" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.023128 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="89930a1c-e5ae-4885-9dfd-f9c10df38c8a" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.023161 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f6f2cc5-7e1a-444e-94ea-a2e25c455a73" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.023174 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ed6f24b-f15e-4540-8fd2-1800a086a69e" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.023181 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aba55d3-6790-4f26-9663-63bdf0c9991e" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.023190 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0e2da1-5f69-4d95-b9eb-4b588258d3f0" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.023198 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4f3221-afad-4a1d-8471-8018c2f08ddc" containerName="registry-server" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.023896 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.028451 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs"] Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.031007 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.031248 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.130666 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25299af2-40e6-49f4-ac1c-48c0fde0882c-secret-volume\") pod \"collect-profiles-29431440-vndbs\" (UID: \"25299af2-40e6-49f4-ac1c-48c0fde0882c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.130732 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fwfk\" (UniqueName: \"kubernetes.io/projected/25299af2-40e6-49f4-ac1c-48c0fde0882c-kube-api-access-9fwfk\") pod \"collect-profiles-29431440-vndbs\" (UID: \"25299af2-40e6-49f4-ac1c-48c0fde0882c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.130778 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25299af2-40e6-49f4-ac1c-48c0fde0882c-config-volume\") pod \"collect-profiles-29431440-vndbs\" (UID: \"25299af2-40e6-49f4-ac1c-48c0fde0882c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.232250 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25299af2-40e6-49f4-ac1c-48c0fde0882c-secret-volume\") pod \"collect-profiles-29431440-vndbs\" (UID: \"25299af2-40e6-49f4-ac1c-48c0fde0882c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.232321 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fwfk\" (UniqueName: \"kubernetes.io/projected/25299af2-40e6-49f4-ac1c-48c0fde0882c-kube-api-access-9fwfk\") pod \"collect-profiles-29431440-vndbs\" (UID: \"25299af2-40e6-49f4-ac1c-48c0fde0882c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.232372 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25299af2-40e6-49f4-ac1c-48c0fde0882c-config-volume\") pod \"collect-profiles-29431440-vndbs\" (UID: \"25299af2-40e6-49f4-ac1c-48c0fde0882c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.233867 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25299af2-40e6-49f4-ac1c-48c0fde0882c-config-volume\") pod 
\"collect-profiles-29431440-vndbs\" (UID: \"25299af2-40e6-49f4-ac1c-48c0fde0882c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.290061 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25299af2-40e6-49f4-ac1c-48c0fde0882c-secret-volume\") pod \"collect-profiles-29431440-vndbs\" (UID: \"25299af2-40e6-49f4-ac1c-48c0fde0882c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.302031 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fwfk\" (UniqueName: \"kubernetes.io/projected/25299af2-40e6-49f4-ac1c-48c0fde0882c-kube-api-access-9fwfk\") pod \"collect-profiles-29431440-vndbs\" (UID: \"25299af2-40e6-49f4-ac1c-48c0fde0882c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.348196 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" Dec 16 12:00:51 crc kubenswrapper[4805]: I1216 12:00:51.531303 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs"] Dec 16 12:00:52 crc kubenswrapper[4805]: I1216 12:00:52.528398 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" event={"ID":"25299af2-40e6-49f4-ac1c-48c0fde0882c","Type":"ContainerStarted","Data":"956510827d0299d63d623142627ea818c28b22657e5525125448276510873b49"} Dec 16 12:00:53 crc kubenswrapper[4805]: I1216 12:00:53.531514 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" event={"ID":"25299af2-40e6-49f4-ac1c-48c0fde0882c","Type":"ContainerStarted","Data":"56adc8eea5544632c07d921ee409d199225698603e22e1516bae4e8cc57ea445"} Dec 16 12:00:55 crc kubenswrapper[4805]: I1216 12:00:55.544238 4805 generic.go:334] "Generic (PLEG): container finished" podID="25299af2-40e6-49f4-ac1c-48c0fde0882c" containerID="56adc8eea5544632c07d921ee409d199225698603e22e1516bae4e8cc57ea445" exitCode=0 Dec 16 12:00:55 crc kubenswrapper[4805]: I1216 12:00:55.544369 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" event={"ID":"25299af2-40e6-49f4-ac1c-48c0fde0882c","Type":"ContainerDied","Data":"56adc8eea5544632c07d921ee409d199225698603e22e1516bae4e8cc57ea445"} Dec 16 12:00:56 crc kubenswrapper[4805]: I1216 12:00:56.761685 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" Dec 16 12:00:56 crc kubenswrapper[4805]: I1216 12:00:56.946083 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25299af2-40e6-49f4-ac1c-48c0fde0882c-secret-volume\") pod \"25299af2-40e6-49f4-ac1c-48c0fde0882c\" (UID: \"25299af2-40e6-49f4-ac1c-48c0fde0882c\") " Dec 16 12:00:56 crc kubenswrapper[4805]: I1216 12:00:56.946169 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25299af2-40e6-49f4-ac1c-48c0fde0882c-config-volume\") pod \"25299af2-40e6-49f4-ac1c-48c0fde0882c\" (UID: \"25299af2-40e6-49f4-ac1c-48c0fde0882c\") " Dec 16 12:00:56 crc kubenswrapper[4805]: I1216 12:00:56.946283 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fwfk\" (UniqueName: \"kubernetes.io/projected/25299af2-40e6-49f4-ac1c-48c0fde0882c-kube-api-access-9fwfk\") pod \"25299af2-40e6-49f4-ac1c-48c0fde0882c\" (UID: \"25299af2-40e6-49f4-ac1c-48c0fde0882c\") " Dec 16 12:00:56 crc kubenswrapper[4805]: I1216 12:00:56.947711 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25299af2-40e6-49f4-ac1c-48c0fde0882c-config-volume" (OuterVolumeSpecName: "config-volume") pod "25299af2-40e6-49f4-ac1c-48c0fde0882c" (UID: "25299af2-40e6-49f4-ac1c-48c0fde0882c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:00:56 crc kubenswrapper[4805]: I1216 12:00:56.965295 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25299af2-40e6-49f4-ac1c-48c0fde0882c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "25299af2-40e6-49f4-ac1c-48c0fde0882c" (UID: "25299af2-40e6-49f4-ac1c-48c0fde0882c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:00:56 crc kubenswrapper[4805]: I1216 12:00:56.973871 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25299af2-40e6-49f4-ac1c-48c0fde0882c-kube-api-access-9fwfk" (OuterVolumeSpecName: "kube-api-access-9fwfk") pod "25299af2-40e6-49f4-ac1c-48c0fde0882c" (UID: "25299af2-40e6-49f4-ac1c-48c0fde0882c"). InnerVolumeSpecName "kube-api-access-9fwfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:00:57 crc kubenswrapper[4805]: I1216 12:00:57.047775 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25299af2-40e6-49f4-ac1c-48c0fde0882c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:57 crc kubenswrapper[4805]: I1216 12:00:57.047819 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25299af2-40e6-49f4-ac1c-48c0fde0882c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:57 crc kubenswrapper[4805]: I1216 12:00:57.047833 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fwfk\" (UniqueName: \"kubernetes.io/projected/25299af2-40e6-49f4-ac1c-48c0fde0882c-kube-api-access-9fwfk\") on node \"crc\" DevicePath \"\"" Dec 16 12:00:57 crc kubenswrapper[4805]: I1216 12:00:57.557196 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" event={"ID":"25299af2-40e6-49f4-ac1c-48c0fde0882c","Type":"ContainerDied","Data":"956510827d0299d63d623142627ea818c28b22657e5525125448276510873b49"} Dec 16 12:00:57 crc kubenswrapper[4805]: I1216 12:00:57.557235 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="956510827d0299d63d623142627ea818c28b22657e5525125448276510873b49" Dec 16 12:00:57 crc kubenswrapper[4805]: I1216 12:00:57.557259 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs" Dec 16 12:01:18 crc kubenswrapper[4805]: I1216 12:01:18.163874 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nq4vq"] Dec 16 12:01:18 crc kubenswrapper[4805]: I1216 12:01:18.164568 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" podUID="7f262dc5-9bae-450c-ab81-3172ba82e700" containerName="controller-manager" containerID="cri-o://cf13e8e0dbbe2add8b9107202c98b5621c21fdf149192fbe51c81572eb1bff69" gracePeriod=30 Dec 16 12:01:18 crc kubenswrapper[4805]: I1216 12:01:18.248831 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv"] Dec 16 12:01:18 crc kubenswrapper[4805]: I1216 12:01:18.249120 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" podUID="58ff610a-1f7d-4e29-83c2-a95691a7dbc3" containerName="route-controller-manager" containerID="cri-o://ddf3d6db7979c1acb1f6993322fc67c9e1fe014433e7378774eed2b20ad3c20d" gracePeriod=30 Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.308468 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.370004 4805 util.go:48] "No ready sandbox for pod can be found. 
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.370004 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.386453 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-serving-cert\") pod \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") "
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.386490 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-config\") pod \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") "
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.386547 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brmjb\" (UniqueName: \"kubernetes.io/projected/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-kube-api-access-brmjb\") pod \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") "
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.386567 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t7bj\" (UniqueName: \"kubernetes.io/projected/7f262dc5-9bae-450c-ab81-3172ba82e700-kube-api-access-7t7bj\") pod \"7f262dc5-9bae-450c-ab81-3172ba82e700\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") "
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.386583 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-client-ca\") pod \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\" (UID: \"58ff610a-1f7d-4e29-83c2-a95691a7dbc3\") "
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.386608 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-config\") pod \"7f262dc5-9bae-450c-ab81-3172ba82e700\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") "
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.386627 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-proxy-ca-bundles\") pod \"7f262dc5-9bae-450c-ab81-3172ba82e700\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") "
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.386652 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-client-ca\") pod \"7f262dc5-9bae-450c-ab81-3172ba82e700\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") "
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.386674 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f262dc5-9bae-450c-ab81-3172ba82e700-serving-cert\") pod \"7f262dc5-9bae-450c-ab81-3172ba82e700\" (UID: \"7f262dc5-9bae-450c-ab81-3172ba82e700\") "
"58ff610a-1f7d-4e29-83c2-a95691a7dbc3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.388869 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-client-ca" (OuterVolumeSpecName: "client-ca") pod "7f262dc5-9bae-450c-ab81-3172ba82e700" (UID: "7f262dc5-9bae-450c-ab81-3172ba82e700"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.388891 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-config" (OuterVolumeSpecName: "config") pod "7f262dc5-9bae-450c-ab81-3172ba82e700" (UID: "7f262dc5-9bae-450c-ab81-3172ba82e700"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.388882 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7f262dc5-9bae-450c-ab81-3172ba82e700" (UID: "7f262dc5-9bae-450c-ab81-3172ba82e700"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.390745 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-config" (OuterVolumeSpecName: "config") pod "58ff610a-1f7d-4e29-83c2-a95691a7dbc3" (UID: "58ff610a-1f7d-4e29-83c2-a95691a7dbc3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.401279 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-kube-api-access-brmjb" (OuterVolumeSpecName: "kube-api-access-brmjb") pod "58ff610a-1f7d-4e29-83c2-a95691a7dbc3" (UID: "58ff610a-1f7d-4e29-83c2-a95691a7dbc3"). InnerVolumeSpecName "kube-api-access-brmjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.401822 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "58ff610a-1f7d-4e29-83c2-a95691a7dbc3" (UID: "58ff610a-1f7d-4e29-83c2-a95691a7dbc3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.408309 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f262dc5-9bae-450c-ab81-3172ba82e700-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7f262dc5-9bae-450c-ab81-3172ba82e700" (UID: "7f262dc5-9bae-450c-ab81-3172ba82e700"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.408409 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f262dc5-9bae-450c-ab81-3172ba82e700-kube-api-access-7t7bj" (OuterVolumeSpecName: "kube-api-access-7t7bj") pod "7f262dc5-9bae-450c-ab81-3172ba82e700" (UID: "7f262dc5-9bae-450c-ab81-3172ba82e700"). 
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.408409 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f262dc5-9bae-450c-ab81-3172ba82e700-kube-api-access-7t7bj" (OuterVolumeSpecName: "kube-api-access-7t7bj") pod "7f262dc5-9bae-450c-ab81-3172ba82e700" (UID: "7f262dc5-9bae-450c-ab81-3172ba82e700"). InnerVolumeSpecName "kube-api-access-7t7bj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.491370 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t7bj\" (UniqueName: \"kubernetes.io/projected/7f262dc5-9bae-450c-ab81-3172ba82e700-kube-api-access-7t7bj\") on node \"crc\" DevicePath \"\""
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.491437 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-client-ca\") on node \"crc\" DevicePath \"\""
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.491451 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-config\") on node \"crc\" DevicePath \"\""
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.491463 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.491474 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f262dc5-9bae-450c-ab81-3172ba82e700-client-ca\") on node \"crc\" DevicePath \"\""
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.491486 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f262dc5-9bae-450c-ab81-3172ba82e700-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.491496 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.491506 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-config\") on node \"crc\" DevicePath \"\""
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.491519 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brmjb\" (UniqueName: \"kubernetes.io/projected/58ff610a-1f7d-4e29-83c2-a95691a7dbc3-kube-api-access-brmjb\") on node \"crc\" DevicePath \"\""
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.669472 4805 generic.go:334] "Generic (PLEG): container finished" podID="7f262dc5-9bae-450c-ab81-3172ba82e700" containerID="cf13e8e0dbbe2add8b9107202c98b5621c21fdf149192fbe51c81572eb1bff69" exitCode=0
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.669534 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" event={"ID":"7f262dc5-9bae-450c-ab81-3172ba82e700","Type":"ContainerDied","Data":"cf13e8e0dbbe2add8b9107202c98b5621c21fdf149192fbe51c81572eb1bff69"}
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.669567 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" event={"ID":"7f262dc5-9bae-450c-ab81-3172ba82e700","Type":"ContainerDied","Data":"6464d92d67da9311ec9d5dafb3a6de00752b7bbcd0f5ba9225a079d10b993d5b"}
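Note the two ContainerDied events for the same pod: one for the controller-manager container (cf13e8e0…) and one for its sandbox (6464d92d…). A sketch that groups PLEG events by pod UID, assuming the event={…} payload keeps the exact shape printed above:

```python
import re

# Group PLEG events per pod UID from the event={"ID":...,"Type":...,"Data":...}
# payloads in the journal lines.
EVENT = re.compile(r'event={"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"}')

def pleg_events(lines):
    by_pod = {}
    for line in lines:
        if m := EVENT.search(line):
            uid, etype, data = m.groups()
            by_pod.setdefault(uid, []).append((etype, data))
    return by_pod
# e.g. by_pod["7f262dc5-..."] ends with two ("ContainerDied", ...) entries:
# the container, then the sandbox.
```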
containerID="cf13e8e0dbbe2add8b9107202c98b5621c21fdf149192fbe51c81572eb1bff69" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.669690 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nq4vq" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.672661 4805 generic.go:334] "Generic (PLEG): container finished" podID="58ff610a-1f7d-4e29-83c2-a95691a7dbc3" containerID="ddf3d6db7979c1acb1f6993322fc67c9e1fe014433e7378774eed2b20ad3c20d" exitCode=0 Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.672708 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" event={"ID":"58ff610a-1f7d-4e29-83c2-a95691a7dbc3","Type":"ContainerDied","Data":"ddf3d6db7979c1acb1f6993322fc67c9e1fe014433e7378774eed2b20ad3c20d"} Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.672729 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.672746 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv" event={"ID":"58ff610a-1f7d-4e29-83c2-a95691a7dbc3","Type":"ContainerDied","Data":"379982bb8bc65714fa0d40557628f315c5deda194c5366c17989d96647236cfe"} Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.696270 4805 scope.go:117] "RemoveContainer" containerID="cf13e8e0dbbe2add8b9107202c98b5621c21fdf149192fbe51c81572eb1bff69" Dec 16 12:01:19 crc kubenswrapper[4805]: E1216 12:01:19.698091 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf13e8e0dbbe2add8b9107202c98b5621c21fdf149192fbe51c81572eb1bff69\": container with ID starting with cf13e8e0dbbe2add8b9107202c98b5621c21fdf149192fbe51c81572eb1bff69 not found: ID does not exist" containerID="cf13e8e0dbbe2add8b9107202c98b5621c21fdf149192fbe51c81572eb1bff69" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.698136 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf13e8e0dbbe2add8b9107202c98b5621c21fdf149192fbe51c81572eb1bff69"} err="failed to get container status \"cf13e8e0dbbe2add8b9107202c98b5621c21fdf149192fbe51c81572eb1bff69\": rpc error: code = NotFound desc = could not find container \"cf13e8e0dbbe2add8b9107202c98b5621c21fdf149192fbe51c81572eb1bff69\": container with ID starting with cf13e8e0dbbe2add8b9107202c98b5621c21fdf149192fbe51c81572eb1bff69 not found: ID does not exist" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.698173 4805 scope.go:117] "RemoveContainer" containerID="ddf3d6db7979c1acb1f6993322fc67c9e1fe014433e7378774eed2b20ad3c20d" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.711561 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nq4vq"] Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.716022 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nq4vq"] Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.726824 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv"] Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.729878 4805 
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.729878 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jt4tv"]
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.732233 4805 scope.go:117] "RemoveContainer" containerID="ddf3d6db7979c1acb1f6993322fc67c9e1fe014433e7378774eed2b20ad3c20d"
Dec 16 12:01:19 crc kubenswrapper[4805]: E1216 12:01:19.732678 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddf3d6db7979c1acb1f6993322fc67c9e1fe014433e7378774eed2b20ad3c20d\": container with ID starting with ddf3d6db7979c1acb1f6993322fc67c9e1fe014433e7378774eed2b20ad3c20d not found: ID does not exist" containerID="ddf3d6db7979c1acb1f6993322fc67c9e1fe014433e7378774eed2b20ad3c20d"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.732728 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddf3d6db7979c1acb1f6993322fc67c9e1fe014433e7378774eed2b20ad3c20d"} err="failed to get container status \"ddf3d6db7979c1acb1f6993322fc67c9e1fe014433e7378774eed2b20ad3c20d\": rpc error: code = NotFound desc = could not find container \"ddf3d6db7979c1acb1f6993322fc67c9e1fe014433e7378774eed2b20ad3c20d\": container with ID starting with ddf3d6db7979c1acb1f6993322fc67c9e1fe014433e7378774eed2b20ad3c20d not found: ID does not exist"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.848254 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk"]
Dec 16 12:01:19 crc kubenswrapper[4805]: E1216 12:01:19.848571 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f262dc5-9bae-450c-ab81-3172ba82e700" containerName="controller-manager"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.848596 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f262dc5-9bae-450c-ab81-3172ba82e700" containerName="controller-manager"
Dec 16 12:01:19 crc kubenswrapper[4805]: E1216 12:01:19.848607 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ff610a-1f7d-4e29-83c2-a95691a7dbc3" containerName="route-controller-manager"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.848618 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ff610a-1f7d-4e29-83c2-a95691a7dbc3" containerName="route-controller-manager"
Dec 16 12:01:19 crc kubenswrapper[4805]: E1216 12:01:19.848634 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25299af2-40e6-49f4-ac1c-48c0fde0882c" containerName="collect-profiles"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.848644 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="25299af2-40e6-49f4-ac1c-48c0fde0882c" containerName="collect-profiles"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.848782 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ff610a-1f7d-4e29-83c2-a95691a7dbc3" containerName="route-controller-manager"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.848800 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f262dc5-9bae-450c-ab81-3172ba82e700" containerName="controller-manager"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.848811 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="25299af2-40e6-49f4-ac1c-48c0fde0882c" containerName="collect-profiles"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.848827 4805 memory_manager.go:354] "RemoveStaleState removing state"
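With the old pods gone, admission of the replacement pod triggers RemoveStaleState in the CPU and memory managers, which drops per-container CPUSet and memory assignments left behind by the deleted pods. A sketch tallying those removals per pod UID:

```python
import re
from collections import Counter

# Tally RemoveStaleState / Deleted-CPUSet entries per pod UID.
STALE = re.compile(r'podUID="([0-9a-f-]+)" containerName="([^"]+)"')

def stale_state_removals(lines):
    per_pod = Counter()
    for line in lines:
        if "RemoveStaleState" in line or "Deleted CPUSet assignment" in line:
            if m := STALE.search(line):
                per_pod[m.group(1)] += 1
    return per_pod  # e.g. Counter({"7f262dc5-...": 3, ...})
```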
podUID="25bbef6b-4746-41e9-83ef-20e9c54a7451" containerName="marketplace-operator" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.849394 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.851517 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84656bf6cf-5lhcz"] Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.852028 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.852484 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.852721 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.852881 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.853252 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.853322 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.855519 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.858355 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.858486 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.858687 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.858945 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.859010 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.859187 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.866469 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.871345 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84656bf6cf-5lhcz"] Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.875715 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk"] Dec 16 12:01:19 crc 
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.997036 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw8ls\" (UniqueName: \"kubernetes.io/projected/c41af98c-2818-465e-9b72-238ae35ae902-kube-api-access-xw8ls\") pod \"route-controller-manager-5ccfb6bb66-c47jk\" (UID: \"c41af98c-2818-465e-9b72-238ae35ae902\") " pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.997104 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-proxy-ca-bundles\") pod \"controller-manager-84656bf6cf-5lhcz\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.997256 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-config\") pod \"controller-manager-84656bf6cf-5lhcz\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.997354 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c41af98c-2818-465e-9b72-238ae35ae902-serving-cert\") pod \"route-controller-manager-5ccfb6bb66-c47jk\" (UID: \"c41af98c-2818-465e-9b72-238ae35ae902\") " pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.997406 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41af98c-2818-465e-9b72-238ae35ae902-config\") pod \"route-controller-manager-5ccfb6bb66-c47jk\" (UID: \"c41af98c-2818-465e-9b72-238ae35ae902\") " pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.997454 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c41af98c-2818-465e-9b72-238ae35ae902-client-ca\") pod \"route-controller-manager-5ccfb6bb66-c47jk\" (UID: \"c41af98c-2818-465e-9b72-238ae35ae902\") " pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk"
Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.997521 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-serving-cert\") pod \"controller-manager-84656bf6cf-5lhcz\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz"
pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" Dec 16 12:01:19 crc kubenswrapper[4805]: I1216 12:01:19.997596 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-client-ca\") pod \"controller-manager-84656bf6cf-5lhcz\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.098962 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-serving-cert\") pod \"controller-manager-84656bf6cf-5lhcz\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.099070 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w6ts\" (UniqueName: \"kubernetes.io/projected/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-kube-api-access-4w6ts\") pod \"controller-manager-84656bf6cf-5lhcz\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.099104 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-client-ca\") pod \"controller-manager-84656bf6cf-5lhcz\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.099163 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw8ls\" (UniqueName: \"kubernetes.io/projected/c41af98c-2818-465e-9b72-238ae35ae902-kube-api-access-xw8ls\") pod \"route-controller-manager-5ccfb6bb66-c47jk\" (UID: \"c41af98c-2818-465e-9b72-238ae35ae902\") " pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.099186 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-proxy-ca-bundles\") pod \"controller-manager-84656bf6cf-5lhcz\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.099245 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-config\") pod \"controller-manager-84656bf6cf-5lhcz\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.099269 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c41af98c-2818-465e-9b72-238ae35ae902-serving-cert\") pod \"route-controller-manager-5ccfb6bb66-c47jk\" (UID: \"c41af98c-2818-465e-9b72-238ae35ae902\") " pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.099304 4805 
Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.099304 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41af98c-2818-465e-9b72-238ae35ae902-config\") pod \"route-controller-manager-5ccfb6bb66-c47jk\" (UID: \"c41af98c-2818-465e-9b72-238ae35ae902\") " pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk"
Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.099330 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c41af98c-2818-465e-9b72-238ae35ae902-client-ca\") pod \"route-controller-manager-5ccfb6bb66-c47jk\" (UID: \"c41af98c-2818-465e-9b72-238ae35ae902\") " pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk"
Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.100475 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c41af98c-2818-465e-9b72-238ae35ae902-client-ca\") pod \"route-controller-manager-5ccfb6bb66-c47jk\" (UID: \"c41af98c-2818-465e-9b72-238ae35ae902\") " pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk"
Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.100489 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-proxy-ca-bundles\") pod \"controller-manager-84656bf6cf-5lhcz\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz"
Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.100807 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-client-ca\") pod \"controller-manager-84656bf6cf-5lhcz\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz"
Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.101446 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-config\") pod \"controller-manager-84656bf6cf-5lhcz\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz"
Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.101635 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41af98c-2818-465e-9b72-238ae35ae902-config\") pod \"route-controller-manager-5ccfb6bb66-c47jk\" (UID: \"c41af98c-2818-465e-9b72-238ae35ae902\") " pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk"
Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.108904 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-serving-cert\") pod \"controller-manager-84656bf6cf-5lhcz\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz"
\"c41af98c-2818-465e-9b72-238ae35ae902\") " pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.116312 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw8ls\" (UniqueName: \"kubernetes.io/projected/c41af98c-2818-465e-9b72-238ae35ae902-kube-api-access-xw8ls\") pod \"route-controller-manager-5ccfb6bb66-c47jk\" (UID: \"c41af98c-2818-465e-9b72-238ae35ae902\") " pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.116366 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w6ts\" (UniqueName: \"kubernetes.io/projected/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-kube-api-access-4w6ts\") pod \"controller-manager-84656bf6cf-5lhcz\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.168418 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.175774 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.403461 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk"] Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.531011 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58ff610a-1f7d-4e29-83c2-a95691a7dbc3" path="/var/lib/kubelet/pods/58ff610a-1f7d-4e29-83c2-a95691a7dbc3/volumes" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.532613 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f262dc5-9bae-450c-ab81-3172ba82e700" path="/var/lib/kubelet/pods/7f262dc5-9bae-450c-ab81-3172ba82e700/volumes" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.664901 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84656bf6cf-5lhcz"] Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.680567 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" event={"ID":"c41af98c-2818-465e-9b72-238ae35ae902","Type":"ContainerStarted","Data":"84dff506d85c9cf058c0e4fb9549cf5b7e3aee1c945ea10cc189238cf8fdf9b9"} Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.680772 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.680863 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" event={"ID":"c41af98c-2818-465e-9b72-238ae35ae902","Type":"ContainerStarted","Data":"f1b332009e911854e49d46e995fbc18aed30dc9cd8848f28b70b588de3c743b4"} Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.682296 4805 patch_prober.go:28] interesting pod/route-controller-manager-5ccfb6bb66-c47jk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": 
Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.682296 4805 patch_prober.go:28] interesting pod/route-controller-manager-5ccfb6bb66-c47jk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.682352 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" podUID="c41af98c-2818-465e-9b72-238ae35ae902" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.682484 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" event={"ID":"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2","Type":"ContainerStarted","Data":"9dc38a186a28af10ac9c0054fc4a9970fb64ff3694fc17b0f8d45171f57fc9cb"}
Dec 16 12:01:20 crc kubenswrapper[4805]: I1216 12:01:20.699066 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" podStartSLOduration=2.699047165 podStartE2EDuration="2.699047165s" podCreationTimestamp="2025-12-16 12:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:01:20.697866691 +0000 UTC m=+354.416124496" watchObservedRunningTime="2025-12-16 12:01:20.699047165 +0000 UTC m=+354.417304980"
Dec 16 12:01:21 crc kubenswrapper[4805]: I1216 12:01:21.692564 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" event={"ID":"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2","Type":"ContainerStarted","Data":"0bab6011f910b446d040de39b6fbfb78c67c81497045a9b246f4357d75715f3c"}
Dec 16 12:01:21 crc kubenswrapper[4805]: I1216 12:01:21.697255 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk"
Dec 16 12:01:21 crc kubenswrapper[4805]: I1216 12:01:21.712718 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" podStartSLOduration=3.71269954 podStartE2EDuration="3.71269954s" podCreationTimestamp="2025-12-16 12:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:01:21.712452773 +0000 UTC m=+355.430710588" watchObservedRunningTime="2025-12-16 12:01:21.71269954 +0000 UTC m=+355.430957355"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.081758 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ptrmm"]
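The first readiness probe fails with connection refused because nothing is listening on 10.217.0.62:8443 yet; about a second later the probe flips to ready and the startup-latency tracker records podStartSLOduration. A sketch extracting those durations:

```python
import re

# Collect the startup SLO durations recorded once each replacement pod
# went ready.
SLO = re.compile(r'"Observed pod startup duration" pod="([^"]+)" '
                 r'podStartSLOduration=([0-9.]+)')

def startup_durations(lines):
    return {m.group(1): float(m.group(2))
            for line in lines if (m := SLO.search(line))}
# e.g. {".../route-controller-manager-5ccfb6bb66-c47jk": 2.699047165,
#       ".../controller-manager-84656bf6cf-5lhcz": 3.71269954}
```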
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.082886 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ptrmm"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.084895 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.094216 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ptrmm"]
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.128478 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eb513c8-933e-42ca-8720-a0fed194ea8d-utilities\") pod \"redhat-operators-ptrmm\" (UID: \"2eb513c8-933e-42ca-8720-a0fed194ea8d\") " pod="openshift-marketplace/redhat-operators-ptrmm"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.128849 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eb513c8-933e-42ca-8720-a0fed194ea8d-catalog-content\") pod \"redhat-operators-ptrmm\" (UID: \"2eb513c8-933e-42ca-8720-a0fed194ea8d\") " pod="openshift-marketplace/redhat-operators-ptrmm"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.128964 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp6hw\" (UniqueName: \"kubernetes.io/projected/2eb513c8-933e-42ca-8720-a0fed194ea8d-kube-api-access-kp6hw\") pod \"redhat-operators-ptrmm\" (UID: \"2eb513c8-933e-42ca-8720-a0fed194ea8d\") " pod="openshift-marketplace/redhat-operators-ptrmm"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.230035 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp6hw\" (UniqueName: \"kubernetes.io/projected/2eb513c8-933e-42ca-8720-a0fed194ea8d-kube-api-access-kp6hw\") pod \"redhat-operators-ptrmm\" (UID: \"2eb513c8-933e-42ca-8720-a0fed194ea8d\") " pod="openshift-marketplace/redhat-operators-ptrmm"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.230432 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eb513c8-933e-42ca-8720-a0fed194ea8d-utilities\") pod \"redhat-operators-ptrmm\" (UID: \"2eb513c8-933e-42ca-8720-a0fed194ea8d\") " pod="openshift-marketplace/redhat-operators-ptrmm"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.230789 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eb513c8-933e-42ca-8720-a0fed194ea8d-catalog-content\") pod \"redhat-operators-ptrmm\" (UID: \"2eb513c8-933e-42ca-8720-a0fed194ea8d\") " pod="openshift-marketplace/redhat-operators-ptrmm"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.230964 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eb513c8-933e-42ca-8720-a0fed194ea8d-utilities\") pod \"redhat-operators-ptrmm\" (UID: \"2eb513c8-933e-42ca-8720-a0fed194ea8d\") " pod="openshift-marketplace/redhat-operators-ptrmm"
pod="openshift-marketplace/redhat-operators-ptrmm" Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.253679 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp6hw\" (UniqueName: \"kubernetes.io/projected/2eb513c8-933e-42ca-8720-a0fed194ea8d-kube-api-access-kp6hw\") pod \"redhat-operators-ptrmm\" (UID: \"2eb513c8-933e-42ca-8720-a0fed194ea8d\") " pod="openshift-marketplace/redhat-operators-ptrmm" Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.280665 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7qdvb"] Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.281664 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7qdvb" Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.284843 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.297376 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7qdvb"] Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.332020 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mft5r\" (UniqueName: \"kubernetes.io/projected/5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1-kube-api-access-mft5r\") pod \"redhat-marketplace-7qdvb\" (UID: \"5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1\") " pod="openshift-marketplace/redhat-marketplace-7qdvb" Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.332075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1-catalog-content\") pod \"redhat-marketplace-7qdvb\" (UID: \"5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1\") " pod="openshift-marketplace/redhat-marketplace-7qdvb" Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.332198 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1-utilities\") pod \"redhat-marketplace-7qdvb\" (UID: \"5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1\") " pod="openshift-marketplace/redhat-marketplace-7qdvb" Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.399051 4805 util.go:30] "No sandbox for pod can be found. 
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.399051 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ptrmm"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.434908 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1-utilities\") pod \"redhat-marketplace-7qdvb\" (UID: \"5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1\") " pod="openshift-marketplace/redhat-marketplace-7qdvb"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.435184 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mft5r\" (UniqueName: \"kubernetes.io/projected/5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1-kube-api-access-mft5r\") pod \"redhat-marketplace-7qdvb\" (UID: \"5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1\") " pod="openshift-marketplace/redhat-marketplace-7qdvb"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.435322 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1-catalog-content\") pod \"redhat-marketplace-7qdvb\" (UID: \"5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1\") " pod="openshift-marketplace/redhat-marketplace-7qdvb"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.435584 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1-utilities\") pod \"redhat-marketplace-7qdvb\" (UID: \"5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1\") " pod="openshift-marketplace/redhat-marketplace-7qdvb"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.435658 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1-catalog-content\") pod \"redhat-marketplace-7qdvb\" (UID: \"5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1\") " pod="openshift-marketplace/redhat-marketplace-7qdvb"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.454543 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mft5r\" (UniqueName: \"kubernetes.io/projected/5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1-kube-api-access-mft5r\") pod \"redhat-marketplace-7qdvb\" (UID: \"5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1\") " pod="openshift-marketplace/redhat-marketplace-7qdvb"
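Every openshift-marketplace catalog pod in this window follows the same template: two emptyDir volumes (utilities, catalog-content) plus a projected API token, then an extract step that exits 0 before the registry container comes up. A sketch grouping the successfully mounted volumes per catalog pod:

```python
import re

# Volumes successfully mounted per openshift-marketplace catalog pod.
SETUP = re.compile(r'MountVolume\.SetUp succeeded for volume \\"([^\\"]+)\\"'
                   r'.*?pod="openshift-marketplace/([^"]+)"')

def marketplace_mounts(lines):
    pods = {}
    for line in lines:
        if m := SETUP.search(line):
            volume, pod = m.groups()
            pods.setdefault(pod, set()).add(volume)
    return pods
# e.g. pods["redhat-operators-ptrmm"] ==
#      {"utilities", "catalog-content", "kube-api-access-kp6hw"}
```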
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.603430 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7qdvb"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.633748 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ptrmm"]
Dec 16 12:01:22 crc kubenswrapper[4805]: W1216 12:01:22.650106 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eb513c8_933e_42ca_8720_a0fed194ea8d.slice/crio-9a5ad0bfc3e6ccc5603135ebd3354dee7761bee161c9b2a2ae24a144869277d6 WatchSource:0}: Error finding container 9a5ad0bfc3e6ccc5603135ebd3354dee7761bee161c9b2a2ae24a144869277d6: Status 404 returned error can't find the container with id 9a5ad0bfc3e6ccc5603135ebd3354dee7761bee161c9b2a2ae24a144869277d6
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.699321 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptrmm" event={"ID":"2eb513c8-933e-42ca-8720-a0fed194ea8d","Type":"ContainerStarted","Data":"9a5ad0bfc3e6ccc5603135ebd3354dee7761bee161c9b2a2ae24a144869277d6"}
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.699754 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.705041 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz"
Dec 16 12:01:22 crc kubenswrapper[4805]: I1216 12:01:22.814010 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7qdvb"]
Dec 16 12:01:22 crc kubenswrapper[4805]: W1216 12:01:22.820999 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c69a7ab_dcf6_4c11_b1e4_faf7a390ebb1.slice/crio-dfe1aa3269c8865cc83f7874832bb849c4a32fd60483ee67d19c58fa8b4646f3 WatchSource:0}: Error finding container dfe1aa3269c8865cc83f7874832bb849c4a32fd60483ee67d19c58fa8b4646f3: Status 404 returned error can't find the container with id dfe1aa3269c8865cc83f7874832bb849c4a32fd60483ee67d19c58fa8b4646f3
Dec 16 12:01:23 crc kubenswrapper[4805]: I1216 12:01:23.707408 4805 generic.go:334] "Generic (PLEG): container finished" podID="5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1" containerID="2cfcd867f33f3095268946bee18df37fa11ff123655d3b9085d0a5ecc4555bd4" exitCode=0
Dec 16 12:01:23 crc kubenswrapper[4805]: I1216 12:01:23.707469 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7qdvb" event={"ID":"5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1","Type":"ContainerDied","Data":"2cfcd867f33f3095268946bee18df37fa11ff123655d3b9085d0a5ecc4555bd4"}
Dec 16 12:01:23 crc kubenswrapper[4805]: I1216 12:01:23.707783 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7qdvb" event={"ID":"5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1","Type":"ContainerStarted","Data":"dfe1aa3269c8865cc83f7874832bb849c4a32fd60483ee67d19c58fa8b4646f3"}
Dec 16 12:01:23 crc kubenswrapper[4805]: I1216 12:01:23.709243 4805 generic.go:334] "Generic (PLEG): container finished" podID="2eb513c8-933e-42ca-8720-a0fed194ea8d" containerID="cc651b8ff5f43a5eea71ca844e005b9b4076dcb61790f48105d7f19be3aee6bc" exitCode=0
Dec 16 12:01:23 crc kubenswrapper[4805]: I1216 12:01:23.709278 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptrmm"
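The manager.go:1169 warnings are cAdvisor racing container creation: the cgroup watch event arrives before CRI-O has finished registering the container, so the lookup returns 404, while the matching ContainerStarted events show the same containers coming up fine. A sketch flagging watch-failure IDs that also appear in a ContainerStarted event, i.e. the usual benign startup race:

```python
import re

# Container IDs from cAdvisor 404 watch-event warnings that also appear
# in a ContainerStarted event.
WATCH = re.compile(r'Failed to process watch event.*?crio-([0-9a-f]{64})')
STARTED = re.compile(r'"Type":"ContainerStarted","Data":"([0-9a-f]{64})"')

def benign_watch_failures(lines):
    warned, started = set(), set()
    for line in lines:
        if m := WATCH.search(line):
            warned.add(m.group(1))
        if m := STARTED.search(line):
            started.add(m.group(1))
    return warned & started
```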
event={"ID":"2eb513c8-933e-42ca-8720-a0fed194ea8d","Type":"ContainerDied","Data":"cc651b8ff5f43a5eea71ca844e005b9b4076dcb61790f48105d7f19be3aee6bc"} Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.287065 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f5wt9"] Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.288436 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.290366 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.297678 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5wt9"] Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.467751 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngwkm\" (UniqueName: \"kubernetes.io/projected/41271b36-917c-4c75-b884-eacf365001cf-kube-api-access-ngwkm\") pod \"certified-operators-f5wt9\" (UID: \"41271b36-917c-4c75-b884-eacf365001cf\") " pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.468121 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41271b36-917c-4c75-b884-eacf365001cf-utilities\") pod \"certified-operators-f5wt9\" (UID: \"41271b36-917c-4c75-b884-eacf365001cf\") " pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.468298 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41271b36-917c-4c75-b884-eacf365001cf-catalog-content\") pod \"certified-operators-f5wt9\" (UID: \"41271b36-917c-4c75-b884-eacf365001cf\") " pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.569405 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41271b36-917c-4c75-b884-eacf365001cf-catalog-content\") pod \"certified-operators-f5wt9\" (UID: \"41271b36-917c-4c75-b884-eacf365001cf\") " pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.569510 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngwkm\" (UniqueName: \"kubernetes.io/projected/41271b36-917c-4c75-b884-eacf365001cf-kube-api-access-ngwkm\") pod \"certified-operators-f5wt9\" (UID: \"41271b36-917c-4c75-b884-eacf365001cf\") " pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.569562 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41271b36-917c-4c75-b884-eacf365001cf-utilities\") pod \"certified-operators-f5wt9\" (UID: \"41271b36-917c-4c75-b884-eacf365001cf\") " pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.569970 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/41271b36-917c-4c75-b884-eacf365001cf-catalog-content\") pod \"certified-operators-f5wt9\" (UID: \"41271b36-917c-4c75-b884-eacf365001cf\") " pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.569986 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41271b36-917c-4c75-b884-eacf365001cf-utilities\") pod \"certified-operators-f5wt9\" (UID: \"41271b36-917c-4c75-b884-eacf365001cf\") " pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.591305 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngwkm\" (UniqueName: \"kubernetes.io/projected/41271b36-917c-4c75-b884-eacf365001cf-kube-api-access-ngwkm\") pod \"certified-operators-f5wt9\" (UID: \"41271b36-917c-4c75-b884-eacf365001cf\") " pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.610640 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.878704 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5wt9"] Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.888265 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-52zv4"] Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.890960 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52zv4" Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.900431 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 16 12:01:24 crc kubenswrapper[4805]: I1216 12:01:24.912504 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52zv4"] Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.076446 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dab2e93-e0fa-4714-8c47-5b09df4633c5-utilities\") pod \"community-operators-52zv4\" (UID: \"2dab2e93-e0fa-4714-8c47-5b09df4633c5\") " pod="openshift-marketplace/community-operators-52zv4" Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.076991 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcp6s\" (UniqueName: \"kubernetes.io/projected/2dab2e93-e0fa-4714-8c47-5b09df4633c5-kube-api-access-xcp6s\") pod \"community-operators-52zv4\" (UID: \"2dab2e93-e0fa-4714-8c47-5b09df4633c5\") " pod="openshift-marketplace/community-operators-52zv4" Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.077090 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dab2e93-e0fa-4714-8c47-5b09df4633c5-catalog-content\") pod \"community-operators-52zv4\" (UID: \"2dab2e93-e0fa-4714-8c47-5b09df4633c5\") " pod="openshift-marketplace/community-operators-52zv4" Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.178932 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2dab2e93-e0fa-4714-8c47-5b09df4633c5-utilities\") pod \"community-operators-52zv4\" (UID: \"2dab2e93-e0fa-4714-8c47-5b09df4633c5\") " pod="openshift-marketplace/community-operators-52zv4" Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.178987 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcp6s\" (UniqueName: \"kubernetes.io/projected/2dab2e93-e0fa-4714-8c47-5b09df4633c5-kube-api-access-xcp6s\") pod \"community-operators-52zv4\" (UID: \"2dab2e93-e0fa-4714-8c47-5b09df4633c5\") " pod="openshift-marketplace/community-operators-52zv4" Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.179019 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dab2e93-e0fa-4714-8c47-5b09df4633c5-catalog-content\") pod \"community-operators-52zv4\" (UID: \"2dab2e93-e0fa-4714-8c47-5b09df4633c5\") " pod="openshift-marketplace/community-operators-52zv4" Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.179432 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dab2e93-e0fa-4714-8c47-5b09df4633c5-utilities\") pod \"community-operators-52zv4\" (UID: \"2dab2e93-e0fa-4714-8c47-5b09df4633c5\") " pod="openshift-marketplace/community-operators-52zv4" Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.179771 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dab2e93-e0fa-4714-8c47-5b09df4633c5-catalog-content\") pod \"community-operators-52zv4\" (UID: \"2dab2e93-e0fa-4714-8c47-5b09df4633c5\") " pod="openshift-marketplace/community-operators-52zv4" Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.200616 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcp6s\" (UniqueName: \"kubernetes.io/projected/2dab2e93-e0fa-4714-8c47-5b09df4633c5-kube-api-access-xcp6s\") pod \"community-operators-52zv4\" (UID: \"2dab2e93-e0fa-4714-8c47-5b09df4633c5\") " pod="openshift-marketplace/community-operators-52zv4" Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.216548 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-52zv4" Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.639348 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52zv4"] Dec 16 12:01:25 crc kubenswrapper[4805]: W1216 12:01:25.645770 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dab2e93_e0fa_4714_8c47_5b09df4633c5.slice/crio-2e75398637cc09d4cfffa7ee161c1ee9c91f7f0a2327ca644485ca145d17adec WatchSource:0}: Error finding container 2e75398637cc09d4cfffa7ee161c1ee9c91f7f0a2327ca644485ca145d17adec: Status 404 returned error can't find the container with id 2e75398637cc09d4cfffa7ee161c1ee9c91f7f0a2327ca644485ca145d17adec Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.726001 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52zv4" event={"ID":"2dab2e93-e0fa-4714-8c47-5b09df4633c5","Type":"ContainerStarted","Data":"2e75398637cc09d4cfffa7ee161c1ee9c91f7f0a2327ca644485ca145d17adec"} Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.729059 4805 generic.go:334] "Generic (PLEG): container finished" podID="41271b36-917c-4c75-b884-eacf365001cf" containerID="dad0fb75d64b3a6aa01a07c454aef9f2e3dca4cb731462e0ea592942ebda982c" exitCode=0 Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.729119 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5wt9" event={"ID":"41271b36-917c-4c75-b884-eacf365001cf","Type":"ContainerDied","Data":"dad0fb75d64b3a6aa01a07c454aef9f2e3dca4cb731462e0ea592942ebda982c"} Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.729163 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5wt9" event={"ID":"41271b36-917c-4c75-b884-eacf365001cf","Type":"ContainerStarted","Data":"c9b8fd790d54b8b39f0f6ec63024f82b15d2813a66f612036ade29f0950046e3"} Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.732194 4805 generic.go:334] "Generic (PLEG): container finished" podID="5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1" containerID="b34fbb3fc031e30cf53abdae5897b1d7397d3ee3f7c57b0037e859bb0fa33cde" exitCode=0 Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.732263 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7qdvb" event={"ID":"5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1","Type":"ContainerDied","Data":"b34fbb3fc031e30cf53abdae5897b1d7397d3ee3f7c57b0037e859bb0fa33cde"} Dec 16 12:01:25 crc kubenswrapper[4805]: I1216 12:01:25.739225 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptrmm" event={"ID":"2eb513c8-933e-42ca-8720-a0fed194ea8d","Type":"ContainerStarted","Data":"ec9d80fb6101bdc4055e8ef062adc517cdcc2f78f765143753e4d3e6d7658ecf"} Dec 16 12:01:26 crc kubenswrapper[4805]: I1216 12:01:26.745296 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dab2e93-e0fa-4714-8c47-5b09df4633c5" containerID="693b6183565dd12ab504e64c5a43a68342a700a3876e208ca54ccd140a507de7" exitCode=0 Dec 16 12:01:26 crc kubenswrapper[4805]: I1216 12:01:26.745398 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52zv4" event={"ID":"2dab2e93-e0fa-4714-8c47-5b09df4633c5","Type":"ContainerDied","Data":"693b6183565dd12ab504e64c5a43a68342a700a3876e208ca54ccd140a507de7"} Dec 16 12:01:26 crc kubenswrapper[4805]: I1216 12:01:26.752425 
4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5wt9" event={"ID":"41271b36-917c-4c75-b884-eacf365001cf","Type":"ContainerStarted","Data":"c3287245f0a97af9ad430c489aab36f8b79008d5f328e8361e83ae6d40a4e208"} Dec 16 12:01:26 crc kubenswrapper[4805]: I1216 12:01:26.758956 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7qdvb" event={"ID":"5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1","Type":"ContainerStarted","Data":"dc9a418be6fa2dca30dba411fa66f1c458114ff514b7d3c5ce40866c4ca46598"} Dec 16 12:01:26 crc kubenswrapper[4805]: I1216 12:01:26.774386 4805 generic.go:334] "Generic (PLEG): container finished" podID="2eb513c8-933e-42ca-8720-a0fed194ea8d" containerID="ec9d80fb6101bdc4055e8ef062adc517cdcc2f78f765143753e4d3e6d7658ecf" exitCode=0 Dec 16 12:01:26 crc kubenswrapper[4805]: I1216 12:01:26.774450 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptrmm" event={"ID":"2eb513c8-933e-42ca-8720-a0fed194ea8d","Type":"ContainerDied","Data":"ec9d80fb6101bdc4055e8ef062adc517cdcc2f78f765143753e4d3e6d7658ecf"} Dec 16 12:01:26 crc kubenswrapper[4805]: I1216 12:01:26.794689 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7qdvb" podStartSLOduration=2.272105511 podStartE2EDuration="4.794672311s" podCreationTimestamp="2025-12-16 12:01:22 +0000 UTC" firstStartedPulling="2025-12-16 12:01:23.71048819 +0000 UTC m=+357.428745995" lastFinishedPulling="2025-12-16 12:01:26.23305499 +0000 UTC m=+359.951312795" observedRunningTime="2025-12-16 12:01:26.790774037 +0000 UTC m=+360.509031842" watchObservedRunningTime="2025-12-16 12:01:26.794672311 +0000 UTC m=+360.512930126" Dec 16 12:01:27 crc kubenswrapper[4805]: I1216 12:01:27.071846 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:01:27 crc kubenswrapper[4805]: I1216 12:01:27.071918 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:01:27 crc kubenswrapper[4805]: I1216 12:01:27.794365 4805 generic.go:334] "Generic (PLEG): container finished" podID="41271b36-917c-4c75-b884-eacf365001cf" containerID="c3287245f0a97af9ad430c489aab36f8b79008d5f328e8361e83ae6d40a4e208" exitCode=0 Dec 16 12:01:27 crc kubenswrapper[4805]: I1216 12:01:27.794439 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5wt9" event={"ID":"41271b36-917c-4c75-b884-eacf365001cf","Type":"ContainerDied","Data":"c3287245f0a97af9ad430c489aab36f8b79008d5f328e8361e83ae6d40a4e208"} Dec 16 12:01:28 crc kubenswrapper[4805]: I1216 12:01:28.802079 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptrmm" event={"ID":"2eb513c8-933e-42ca-8720-a0fed194ea8d","Type":"ContainerStarted","Data":"034024ebf9a60730f6054a3a2ac454aebda17d5c6fe30d91ff1446528b587c63"} Dec 16 12:01:28 crc kubenswrapper[4805]: I1216 12:01:28.808580 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="2dab2e93-e0fa-4714-8c47-5b09df4633c5" containerID="5d50e1cbdb765973b860e42cc8024012c6b442b3fa2910758da12ea9151ffad2" exitCode=0 Dec 16 12:01:28 crc kubenswrapper[4805]: I1216 12:01:28.808627 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52zv4" event={"ID":"2dab2e93-e0fa-4714-8c47-5b09df4633c5","Type":"ContainerDied","Data":"5d50e1cbdb765973b860e42cc8024012c6b442b3fa2910758da12ea9151ffad2"} Dec 16 12:01:28 crc kubenswrapper[4805]: I1216 12:01:28.822955 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ptrmm" podStartSLOduration=2.875033705 podStartE2EDuration="6.822884746s" podCreationTimestamp="2025-12-16 12:01:22 +0000 UTC" firstStartedPulling="2025-12-16 12:01:23.710899972 +0000 UTC m=+357.429157777" lastFinishedPulling="2025-12-16 12:01:27.658751013 +0000 UTC m=+361.377008818" observedRunningTime="2025-12-16 12:01:28.82198438 +0000 UTC m=+362.540242175" watchObservedRunningTime="2025-12-16 12:01:28.822884746 +0000 UTC m=+362.541142561" Dec 16 12:01:29 crc kubenswrapper[4805]: I1216 12:01:29.815957 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5wt9" event={"ID":"41271b36-917c-4c75-b884-eacf365001cf","Type":"ContainerStarted","Data":"0133204a023f435a73a2a108a51066a83490191bd79fb72ffdeb9179ffba9abb"} Dec 16 12:01:29 crc kubenswrapper[4805]: I1216 12:01:29.849706 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f5wt9" podStartSLOduration=2.842076511 podStartE2EDuration="5.849666203s" podCreationTimestamp="2025-12-16 12:01:24 +0000 UTC" firstStartedPulling="2025-12-16 12:01:25.730573519 +0000 UTC m=+359.448831324" lastFinishedPulling="2025-12-16 12:01:28.738163211 +0000 UTC m=+362.456421016" observedRunningTime="2025-12-16 12:01:29.843250697 +0000 UTC m=+363.561508512" watchObservedRunningTime="2025-12-16 12:01:29.849666203 +0000 UTC m=+363.567924028" Dec 16 12:01:30 crc kubenswrapper[4805]: I1216 12:01:30.822125 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52zv4" event={"ID":"2dab2e93-e0fa-4714-8c47-5b09df4633c5","Type":"ContainerStarted","Data":"4c5727932d6219cc0e43d01f809bbe4c70d98be622829bad8eb53568d19bc5cc"} Dec 16 12:01:30 crc kubenswrapper[4805]: I1216 12:01:30.838205 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-52zv4" podStartSLOduration=3.740896063 podStartE2EDuration="6.838185236s" podCreationTimestamp="2025-12-16 12:01:24 +0000 UTC" firstStartedPulling="2025-12-16 12:01:26.747639922 +0000 UTC m=+360.465897727" lastFinishedPulling="2025-12-16 12:01:29.844929095 +0000 UTC m=+363.563186900" observedRunningTime="2025-12-16 12:01:30.837438784 +0000 UTC m=+364.555696589" watchObservedRunningTime="2025-12-16 12:01:30.838185236 +0000 UTC m=+364.556443051" Dec 16 12:01:32 crc kubenswrapper[4805]: I1216 12:01:32.399625 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ptrmm" Dec 16 12:01:32 crc kubenswrapper[4805]: I1216 12:01:32.400007 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ptrmm" Dec 16 12:01:32 crc kubenswrapper[4805]: I1216 12:01:32.604093 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-7qdvb" Dec 16 12:01:32 crc kubenswrapper[4805]: I1216 12:01:32.604170 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7qdvb" Dec 16 12:01:32 crc kubenswrapper[4805]: I1216 12:01:32.639616 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7qdvb" Dec 16 12:01:32 crc kubenswrapper[4805]: I1216 12:01:32.871773 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7qdvb" Dec 16 12:01:33 crc kubenswrapper[4805]: I1216 12:01:33.441043 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ptrmm" podUID="2eb513c8-933e-42ca-8720-a0fed194ea8d" containerName="registry-server" probeResult="failure" output=< Dec 16 12:01:33 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 16 12:01:33 crc kubenswrapper[4805]: > Dec 16 12:01:34 crc kubenswrapper[4805]: I1216 12:01:34.611273 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:01:34 crc kubenswrapper[4805]: I1216 12:01:34.611311 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:01:34 crc kubenswrapper[4805]: I1216 12:01:34.652159 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:01:34 crc kubenswrapper[4805]: I1216 12:01:34.884997 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:01:35 crc kubenswrapper[4805]: I1216 12:01:35.218210 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-52zv4" Dec 16 12:01:35 crc kubenswrapper[4805]: I1216 12:01:35.218522 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-52zv4" Dec 16 12:01:35 crc kubenswrapper[4805]: I1216 12:01:35.260738 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-52zv4" Dec 16 12:01:35 crc kubenswrapper[4805]: I1216 12:01:35.886736 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-52zv4" Dec 16 12:01:38 crc kubenswrapper[4805]: I1216 12:01:38.148530 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84656bf6cf-5lhcz"] Dec 16 12:01:38 crc kubenswrapper[4805]: I1216 12:01:38.148749 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" podUID="f3d5cbd6-9426-408a-b90d-4f62fa5b08b2" containerName="controller-manager" containerID="cri-o://0bab6011f910b446d040de39b6fbfb78c67c81497045a9b246f4357d75715f3c" gracePeriod=30 Dec 16 12:01:40 crc kubenswrapper[4805]: I1216 12:01:40.176494 4805 patch_prober.go:28] interesting pod/controller-manager-84656bf6cf-5lhcz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Dec 16 12:01:40 crc kubenswrapper[4805]: I1216 
12:01:40.176793 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" podUID="f3d5cbd6-9426-408a-b90d-4f62fa5b08b2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Dec 16 12:01:42 crc kubenswrapper[4805]: I1216 12:01:42.442964 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ptrmm" Dec 16 12:01:42 crc kubenswrapper[4805]: I1216 12:01:42.497245 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ptrmm" Dec 16 12:01:42 crc kubenswrapper[4805]: I1216 12:01:42.885706 4805 generic.go:334] "Generic (PLEG): container finished" podID="f3d5cbd6-9426-408a-b90d-4f62fa5b08b2" containerID="0bab6011f910b446d040de39b6fbfb78c67c81497045a9b246f4357d75715f3c" exitCode=0 Dec 16 12:01:42 crc kubenswrapper[4805]: I1216 12:01:42.885784 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" event={"ID":"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2","Type":"ContainerDied","Data":"0bab6011f910b446d040de39b6fbfb78c67c81497045a9b246f4357d75715f3c"} Dec 16 12:01:42 crc kubenswrapper[4805]: I1216 12:01:42.972413 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.004358 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77b4b6574f-555nd"] Dec 16 12:01:43 crc kubenswrapper[4805]: E1216 12:01:43.004581 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d5cbd6-9426-408a-b90d-4f62fa5b08b2" containerName="controller-manager" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.004592 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d5cbd6-9426-408a-b90d-4f62fa5b08b2" containerName="controller-manager" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.004718 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d5cbd6-9426-408a-b90d-4f62fa5b08b2" containerName="controller-manager" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.005260 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.025666 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77b4b6574f-555nd"] Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.110615 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-proxy-ca-bundles\") pod \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.110713 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w6ts\" (UniqueName: \"kubernetes.io/projected/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-kube-api-access-4w6ts\") pod \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.110803 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-client-ca\") pod \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.110885 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-serving-cert\") pod \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.111006 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-config\") pod \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\" (UID: \"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2\") " Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.111267 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebd10cfe-2c00-4af1-bb08-6113f89ec087-proxy-ca-bundles\") pod \"controller-manager-77b4b6574f-555nd\" (UID: \"ebd10cfe-2c00-4af1-bb08-6113f89ec087\") " pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.111340 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebd10cfe-2c00-4af1-bb08-6113f89ec087-serving-cert\") pod \"controller-manager-77b4b6574f-555nd\" (UID: \"ebd10cfe-2c00-4af1-bb08-6113f89ec087\") " pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.111408 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptvnf\" (UniqueName: \"kubernetes.io/projected/ebd10cfe-2c00-4af1-bb08-6113f89ec087-kube-api-access-ptvnf\") pod \"controller-manager-77b4b6574f-555nd\" (UID: \"ebd10cfe-2c00-4af1-bb08-6113f89ec087\") " pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.111480 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/ebd10cfe-2c00-4af1-bb08-6113f89ec087-client-ca\") pod \"controller-manager-77b4b6574f-555nd\" (UID: \"ebd10cfe-2c00-4af1-bb08-6113f89ec087\") " pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.111579 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebd10cfe-2c00-4af1-bb08-6113f89ec087-config\") pod \"controller-manager-77b4b6574f-555nd\" (UID: \"ebd10cfe-2c00-4af1-bb08-6113f89ec087\") " pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.112209 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-client-ca" (OuterVolumeSpecName: "client-ca") pod "f3d5cbd6-9426-408a-b90d-4f62fa5b08b2" (UID: "f3d5cbd6-9426-408a-b90d-4f62fa5b08b2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.112287 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f3d5cbd6-9426-408a-b90d-4f62fa5b08b2" (UID: "f3d5cbd6-9426-408a-b90d-4f62fa5b08b2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.112740 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-config" (OuterVolumeSpecName: "config") pod "f3d5cbd6-9426-408a-b90d-4f62fa5b08b2" (UID: "f3d5cbd6-9426-408a-b90d-4f62fa5b08b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.117287 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-kube-api-access-4w6ts" (OuterVolumeSpecName: "kube-api-access-4w6ts") pod "f3d5cbd6-9426-408a-b90d-4f62fa5b08b2" (UID: "f3d5cbd6-9426-408a-b90d-4f62fa5b08b2"). InnerVolumeSpecName "kube-api-access-4w6ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.117679 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f3d5cbd6-9426-408a-b90d-4f62fa5b08b2" (UID: "f3d5cbd6-9426-408a-b90d-4f62fa5b08b2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.212875 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebd10cfe-2c00-4af1-bb08-6113f89ec087-client-ca\") pod \"controller-manager-77b4b6574f-555nd\" (UID: \"ebd10cfe-2c00-4af1-bb08-6113f89ec087\") " pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.212943 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebd10cfe-2c00-4af1-bb08-6113f89ec087-config\") pod \"controller-manager-77b4b6574f-555nd\" (UID: \"ebd10cfe-2c00-4af1-bb08-6113f89ec087\") " pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.212970 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebd10cfe-2c00-4af1-bb08-6113f89ec087-proxy-ca-bundles\") pod \"controller-manager-77b4b6574f-555nd\" (UID: \"ebd10cfe-2c00-4af1-bb08-6113f89ec087\") " pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.212995 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebd10cfe-2c00-4af1-bb08-6113f89ec087-serving-cert\") pod \"controller-manager-77b4b6574f-555nd\" (UID: \"ebd10cfe-2c00-4af1-bb08-6113f89ec087\") " pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.213035 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptvnf\" (UniqueName: \"kubernetes.io/projected/ebd10cfe-2c00-4af1-bb08-6113f89ec087-kube-api-access-ptvnf\") pod \"controller-manager-77b4b6574f-555nd\" (UID: \"ebd10cfe-2c00-4af1-bb08-6113f89ec087\") " pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.213074 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.213087 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w6ts\" (UniqueName: \"kubernetes.io/projected/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-kube-api-access-4w6ts\") on node \"crc\" DevicePath \"\"" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.213096 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-client-ca\") on node \"crc\" DevicePath \"\"" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.213104 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.213114 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.214151 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebd10cfe-2c00-4af1-bb08-6113f89ec087-client-ca\") pod \"controller-manager-77b4b6574f-555nd\" (UID: \"ebd10cfe-2c00-4af1-bb08-6113f89ec087\") " pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.214421 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebd10cfe-2c00-4af1-bb08-6113f89ec087-config\") pod \"controller-manager-77b4b6574f-555nd\" (UID: \"ebd10cfe-2c00-4af1-bb08-6113f89ec087\") " pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.214482 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebd10cfe-2c00-4af1-bb08-6113f89ec087-proxy-ca-bundles\") pod \"controller-manager-77b4b6574f-555nd\" (UID: \"ebd10cfe-2c00-4af1-bb08-6113f89ec087\") " pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.216532 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebd10cfe-2c00-4af1-bb08-6113f89ec087-serving-cert\") pod \"controller-manager-77b4b6574f-555nd\" (UID: \"ebd10cfe-2c00-4af1-bb08-6113f89ec087\") " pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.232380 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptvnf\" (UniqueName: \"kubernetes.io/projected/ebd10cfe-2c00-4af1-bb08-6113f89ec087-kube-api-access-ptvnf\") pod \"controller-manager-77b4b6574f-555nd\" (UID: \"ebd10cfe-2c00-4af1-bb08-6113f89ec087\") " pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.325117 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.700623 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77b4b6574f-555nd"] Dec 16 12:01:43 crc kubenswrapper[4805]: W1216 12:01:43.707948 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebd10cfe_2c00_4af1_bb08_6113f89ec087.slice/crio-3ca3d05ac6339b7b14cfcdb0a26c6d056875735883d935a15f94feab635c9342 WatchSource:0}: Error finding container 3ca3d05ac6339b7b14cfcdb0a26c6d056875735883d935a15f94feab635c9342: Status 404 returned error can't find the container with id 3ca3d05ac6339b7b14cfcdb0a26c6d056875735883d935a15f94feab635c9342 Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.892829 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" event={"ID":"f3d5cbd6-9426-408a-b90d-4f62fa5b08b2","Type":"ContainerDied","Data":"9dc38a186a28af10ac9c0054fc4a9970fb64ff3694fc17b0f8d45171f57fc9cb"} Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.892916 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84656bf6cf-5lhcz" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.893194 4805 scope.go:117] "RemoveContainer" containerID="0bab6011f910b446d040de39b6fbfb78c67c81497045a9b246f4357d75715f3c" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.894302 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" event={"ID":"ebd10cfe-2c00-4af1-bb08-6113f89ec087","Type":"ContainerStarted","Data":"e0dd5adf5d9a1d83029c8b1230eaca2d2bdcdaca34e53d3a83bd4baa6dbc7d56"} Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.894352 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" event={"ID":"ebd10cfe-2c00-4af1-bb08-6113f89ec087","Type":"ContainerStarted","Data":"3ca3d05ac6339b7b14cfcdb0a26c6d056875735883d935a15f94feab635c9342"} Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.895229 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.896924 4805 patch_prober.go:28] interesting pod/controller-manager-77b4b6574f-555nd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" start-of-body= Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.896964 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" podUID="ebd10cfe-2c00-4af1-bb08-6113f89ec087" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.913127 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" podStartSLOduration=5.913112473 podStartE2EDuration="5.913112473s" podCreationTimestamp="2025-12-16 12:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:01:43.912267558 +0000 UTC m=+377.630525383" watchObservedRunningTime="2025-12-16 12:01:43.913112473 +0000 UTC m=+377.631370288" Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.939638 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84656bf6cf-5lhcz"] Dec 16 12:01:43 crc kubenswrapper[4805]: I1216 12:01:43.944690 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-84656bf6cf-5lhcz"] Dec 16 12:01:44 crc kubenswrapper[4805]: I1216 12:01:44.530841 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3d5cbd6-9426-408a-b90d-4f62fa5b08b2" path="/var/lib/kubelet/pods/f3d5cbd6-9426-408a-b90d-4f62fa5b08b2/volumes" Dec 16 12:01:44 crc kubenswrapper[4805]: I1216 12:01:44.905528 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77b4b6574f-555nd" Dec 16 12:01:45 crc kubenswrapper[4805]: I1216 12:01:45.942030 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tpb8r"] Dec 16 12:01:45 crc 
kubenswrapper[4805]: I1216 12:01:45.942955 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:45 crc kubenswrapper[4805]: I1216 12:01:45.968361 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tpb8r"] Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.051347 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-registry-certificates\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.051431 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvhjn\" (UniqueName: \"kubernetes.io/projected/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-kube-api-access-rvhjn\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.051457 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-trusted-ca\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.051488 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.051526 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.051551 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-bound-sa-token\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.051581 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.051643 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-registry-tls\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.086215 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.152529 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-registry-tls\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.152589 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-registry-certificates\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.152621 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvhjn\" (UniqueName: \"kubernetes.io/projected/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-kube-api-access-rvhjn\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.152636 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-trusted-ca\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.152660 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.152689 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.152720 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-bound-sa-token\") pod 
\"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.153910 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.154174 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-registry-certificates\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.156270 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-trusted-ca\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.167381 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.170271 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-bound-sa-token\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.173339 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvhjn\" (UniqueName: \"kubernetes.io/projected/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-kube-api-access-rvhjn\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.177820 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b416bca-b6bd-4566-86ee-3ec75d5c4dc5-registry-tls\") pod \"image-registry-66df7c8f76-tpb8r\" (UID: \"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5\") " pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.262665 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.660084 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tpb8r"] Dec 16 12:01:46 crc kubenswrapper[4805]: W1216 12:01:46.662314 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b416bca_b6bd_4566_86ee_3ec75d5c4dc5.slice/crio-ba61d7afcd696602f40efd851e7da7a6755335650187b7d9b1ec18b0d23b27a7 WatchSource:0}: Error finding container ba61d7afcd696602f40efd851e7da7a6755335650187b7d9b1ec18b0d23b27a7: Status 404 returned error can't find the container with id ba61d7afcd696602f40efd851e7da7a6755335650187b7d9b1ec18b0d23b27a7 Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.911314 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" event={"ID":"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5","Type":"ContainerStarted","Data":"01c1057d234818affb7fd8841f546d96a0f8b572b20e8476e85c767535575958"} Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.911691 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" event={"ID":"7b416bca-b6bd-4566-86ee-3ec75d5c4dc5","Type":"ContainerStarted","Data":"ba61d7afcd696602f40efd851e7da7a6755335650187b7d9b1ec18b0d23b27a7"} Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.911746 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:01:46 crc kubenswrapper[4805]: I1216 12:01:46.934834 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" podStartSLOduration=1.934817565 podStartE2EDuration="1.934817565s" podCreationTimestamp="2025-12-16 12:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:01:46.931946012 +0000 UTC m=+380.650203817" watchObservedRunningTime="2025-12-16 12:01:46.934817565 +0000 UTC m=+380.653075380" Dec 16 12:01:57 crc kubenswrapper[4805]: I1216 12:01:57.072005 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:01:57 crc kubenswrapper[4805]: I1216 12:01:57.074895 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:02:06 crc kubenswrapper[4805]: I1216 12:02:06.268015 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-tpb8r" Dec 16 12:02:06 crc kubenswrapper[4805]: I1216 12:02:06.328670 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sb7p7"] Dec 16 12:02:18 crc kubenswrapper[4805]: I1216 12:02:18.161502 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk"] Dec 16 12:02:18 crc kubenswrapper[4805]: I1216 12:02:18.163020 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" podUID="c41af98c-2818-465e-9b72-238ae35ae902" containerName="route-controller-manager" containerID="cri-o://84dff506d85c9cf058c0e4fb9549cf5b7e3aee1c945ea10cc189238cf8fdf9b9" gracePeriod=30 Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.084261 4805 generic.go:334] "Generic (PLEG): container finished" podID="c41af98c-2818-465e-9b72-238ae35ae902" containerID="84dff506d85c9cf058c0e4fb9549cf5b7e3aee1c945ea10cc189238cf8fdf9b9" exitCode=0 Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.084325 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" event={"ID":"c41af98c-2818-465e-9b72-238ae35ae902","Type":"ContainerDied","Data":"84dff506d85c9cf058c0e4fb9549cf5b7e3aee1c945ea10cc189238cf8fdf9b9"} Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.084484 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" event={"ID":"c41af98c-2818-465e-9b72-238ae35ae902","Type":"ContainerDied","Data":"f1b332009e911854e49d46e995fbc18aed30dc9cd8848f28b70b588de3c743b4"} Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.084497 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1b332009e911854e49d46e995fbc18aed30dc9cd8848f28b70b588de3c743b4" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.097492 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.246934 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c41af98c-2818-465e-9b72-238ae35ae902-client-ca\") pod \"c41af98c-2818-465e-9b72-238ae35ae902\" (UID: \"c41af98c-2818-465e-9b72-238ae35ae902\") " Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.247126 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c41af98c-2818-465e-9b72-238ae35ae902-serving-cert\") pod \"c41af98c-2818-465e-9b72-238ae35ae902\" (UID: \"c41af98c-2818-465e-9b72-238ae35ae902\") " Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.247271 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw8ls\" (UniqueName: \"kubernetes.io/projected/c41af98c-2818-465e-9b72-238ae35ae902-kube-api-access-xw8ls\") pod \"c41af98c-2818-465e-9b72-238ae35ae902\" (UID: \"c41af98c-2818-465e-9b72-238ae35ae902\") " Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.247501 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41af98c-2818-465e-9b72-238ae35ae902-config\") pod \"c41af98c-2818-465e-9b72-238ae35ae902\" (UID: \"c41af98c-2818-465e-9b72-238ae35ae902\") " Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.247885 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41af98c-2818-465e-9b72-238ae35ae902-client-ca" (OuterVolumeSpecName: "client-ca") pod "c41af98c-2818-465e-9b72-238ae35ae902" (UID: "c41af98c-2818-465e-9b72-238ae35ae902"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.248244 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41af98c-2818-465e-9b72-238ae35ae902-config" (OuterVolumeSpecName: "config") pod "c41af98c-2818-465e-9b72-238ae35ae902" (UID: "c41af98c-2818-465e-9b72-238ae35ae902"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.252376 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41af98c-2818-465e-9b72-238ae35ae902-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c41af98c-2818-465e-9b72-238ae35ae902" (UID: "c41af98c-2818-465e-9b72-238ae35ae902"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.252531 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41af98c-2818-465e-9b72-238ae35ae902-kube-api-access-xw8ls" (OuterVolumeSpecName: "kube-api-access-xw8ls") pod "c41af98c-2818-465e-9b72-238ae35ae902" (UID: "c41af98c-2818-465e-9b72-238ae35ae902"). InnerVolumeSpecName "kube-api-access-xw8ls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.335034 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm"] Dec 16 12:02:19 crc kubenswrapper[4805]: E1216 12:02:19.335466 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41af98c-2818-465e-9b72-238ae35ae902" containerName="route-controller-manager" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.335486 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41af98c-2818-465e-9b72-238ae35ae902" containerName="route-controller-manager" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.335638 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41af98c-2818-465e-9b72-238ae35ae902" containerName="route-controller-manager" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.336125 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.344871 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm"] Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.349188 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t4kp\" (UniqueName: \"kubernetes.io/projected/6cdcea9a-d7d8-491f-b340-f32757fd8c57-kube-api-access-5t4kp\") pod \"route-controller-manager-7cf84d66c5-m25vm\" (UID: \"6cdcea9a-d7d8-491f-b340-f32757fd8c57\") " pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.349263 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cdcea9a-d7d8-491f-b340-f32757fd8c57-client-ca\") pod \"route-controller-manager-7cf84d66c5-m25vm\" (UID: \"6cdcea9a-d7d8-491f-b340-f32757fd8c57\") " pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.349388 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cdcea9a-d7d8-491f-b340-f32757fd8c57-config\") pod \"route-controller-manager-7cf84d66c5-m25vm\" (UID: \"6cdcea9a-d7d8-491f-b340-f32757fd8c57\") " pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.349430 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cdcea9a-d7d8-491f-b340-f32757fd8c57-serving-cert\") pod \"route-controller-manager-7cf84d66c5-m25vm\" (UID: \"6cdcea9a-d7d8-491f-b340-f32757fd8c57\") " pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.349541 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw8ls\" (UniqueName: \"kubernetes.io/projected/c41af98c-2818-465e-9b72-238ae35ae902-kube-api-access-xw8ls\") on node \"crc\" DevicePath \"\"" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.349597 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c41af98c-2818-465e-9b72-238ae35ae902-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.349611 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c41af98c-2818-465e-9b72-238ae35ae902-client-ca\") on node \"crc\" DevicePath \"\"" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.349623 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c41af98c-2818-465e-9b72-238ae35ae902-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.450699 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cdcea9a-d7d8-491f-b340-f32757fd8c57-config\") pod \"route-controller-manager-7cf84d66c5-m25vm\" (UID: \"6cdcea9a-d7d8-491f-b340-f32757fd8c57\") " pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.450761 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cdcea9a-d7d8-491f-b340-f32757fd8c57-serving-cert\") pod \"route-controller-manager-7cf84d66c5-m25vm\" (UID: \"6cdcea9a-d7d8-491f-b340-f32757fd8c57\") " pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.451304 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t4kp\" (UniqueName: \"kubernetes.io/projected/6cdcea9a-d7d8-491f-b340-f32757fd8c57-kube-api-access-5t4kp\") pod \"route-controller-manager-7cf84d66c5-m25vm\" (UID: \"6cdcea9a-d7d8-491f-b340-f32757fd8c57\") " pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.451356 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cdcea9a-d7d8-491f-b340-f32757fd8c57-client-ca\") pod \"route-controller-manager-7cf84d66c5-m25vm\" (UID: \"6cdcea9a-d7d8-491f-b340-f32757fd8c57\") " pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.452160 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6cdcea9a-d7d8-491f-b340-f32757fd8c57-client-ca\") pod \"route-controller-manager-7cf84d66c5-m25vm\" (UID: \"6cdcea9a-d7d8-491f-b340-f32757fd8c57\") " pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.453465 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cdcea9a-d7d8-491f-b340-f32757fd8c57-config\") pod \"route-controller-manager-7cf84d66c5-m25vm\" (UID: \"6cdcea9a-d7d8-491f-b340-f32757fd8c57\") " pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.454041 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cdcea9a-d7d8-491f-b340-f32757fd8c57-serving-cert\") pod \"route-controller-manager-7cf84d66c5-m25vm\" (UID: \"6cdcea9a-d7d8-491f-b340-f32757fd8c57\") " 
pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.470996 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t4kp\" (UniqueName: \"kubernetes.io/projected/6cdcea9a-d7d8-491f-b340-f32757fd8c57-kube-api-access-5t4kp\") pod \"route-controller-manager-7cf84d66c5-m25vm\" (UID: \"6cdcea9a-d7d8-491f-b340-f32757fd8c57\") " pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:19 crc kubenswrapper[4805]: I1216 12:02:19.664023 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:20 crc kubenswrapper[4805]: I1216 12:02:20.089950 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk" Dec 16 12:02:20 crc kubenswrapper[4805]: I1216 12:02:20.118584 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm"] Dec 16 12:02:20 crc kubenswrapper[4805]: I1216 12:02:20.123122 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk"] Dec 16 12:02:20 crc kubenswrapper[4805]: I1216 12:02:20.129555 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ccfb6bb66-c47jk"] Dec 16 12:02:20 crc kubenswrapper[4805]: I1216 12:02:20.530896 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41af98c-2818-465e-9b72-238ae35ae902" path="/var/lib/kubelet/pods/c41af98c-2818-465e-9b72-238ae35ae902/volumes" Dec 16 12:02:21 crc kubenswrapper[4805]: I1216 12:02:21.096754 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" event={"ID":"6cdcea9a-d7d8-491f-b340-f32757fd8c57","Type":"ContainerStarted","Data":"243566fd35b1747dec2f790659f2a8ad1ad4df38e66e811a6a8ee4ee9fe56915"} Dec 16 12:02:21 crc kubenswrapper[4805]: I1216 12:02:21.096809 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" event={"ID":"6cdcea9a-d7d8-491f-b340-f32757fd8c57","Type":"ContainerStarted","Data":"cb7a4b1eadc5765ce368964d2e08a141d1c5eb6a27679b8ba4b3ac89e86b4a50"} Dec 16 12:02:21 crc kubenswrapper[4805]: I1216 12:02:21.097054 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:21 crc kubenswrapper[4805]: I1216 12:02:21.256490 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" Dec 16 12:02:21 crc kubenswrapper[4805]: I1216 12:02:21.283492 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cf84d66c5-m25vm" podStartSLOduration=3.283470392 podStartE2EDuration="3.283470392s" podCreationTimestamp="2025-12-16 12:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:02:21.173575833 +0000 UTC m=+414.891833648" watchObservedRunningTime="2025-12-16 12:02:21.283470392 
+0000 UTC m=+415.001728217" Dec 16 12:02:27 crc kubenswrapper[4805]: I1216 12:02:27.071101 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:02:27 crc kubenswrapper[4805]: I1216 12:02:27.071758 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:02:27 crc kubenswrapper[4805]: I1216 12:02:27.071818 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 12:02:27 crc kubenswrapper[4805]: I1216 12:02:27.072530 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4cf9ea01ecb876171a7dbc132a7c5a959cb0acd82019690d41d23472548d1ff3"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 12:02:27 crc kubenswrapper[4805]: I1216 12:02:27.072590 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://4cf9ea01ecb876171a7dbc132a7c5a959cb0acd82019690d41d23472548d1ff3" gracePeriod=600 Dec 16 12:02:28 crc kubenswrapper[4805]: I1216 12:02:28.175481 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="4cf9ea01ecb876171a7dbc132a7c5a959cb0acd82019690d41d23472548d1ff3" exitCode=0 Dec 16 12:02:28 crc kubenswrapper[4805]: I1216 12:02:28.175581 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"4cf9ea01ecb876171a7dbc132a7c5a959cb0acd82019690d41d23472548d1ff3"} Dec 16 12:02:28 crc kubenswrapper[4805]: I1216 12:02:28.175807 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"53bfcd9e55d02e7133c707fa2852d5ae127e10c11b775a7d5cb91d28e2238a69"} Dec 16 12:02:28 crc kubenswrapper[4805]: I1216 12:02:28.175831 4805 scope.go:117] "RemoveContainer" containerID="bca031f9f0e79c1d655242f4140bafd8394a6c45f8e6f993c4925d58545bb923" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:31.378739 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" podUID="25907081-958d-4fb5-a8d2-ce8454adedfb" containerName="registry" containerID="cri-o://2449ce921aa9b33b0ab481837dfd81cd444baa386f95cea55efc1db02defbee4" gracePeriod=30 Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.202843 4805 generic.go:334] "Generic (PLEG): container finished" podID="25907081-958d-4fb5-a8d2-ce8454adedfb" containerID="2449ce921aa9b33b0ab481837dfd81cd444baa386f95cea55efc1db02defbee4" exitCode=0 Dec 16 
12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.202939 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" event={"ID":"25907081-958d-4fb5-a8d2-ce8454adedfb","Type":"ContainerDied","Data":"2449ce921aa9b33b0ab481837dfd81cd444baa386f95cea55efc1db02defbee4"} Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.339041 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.466897 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25907081-958d-4fb5-a8d2-ce8454adedfb-trusted-ca\") pod \"25907081-958d-4fb5-a8d2-ce8454adedfb\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.467171 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"25907081-958d-4fb5-a8d2-ce8454adedfb\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.467232 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/25907081-958d-4fb5-a8d2-ce8454adedfb-registry-certificates\") pod \"25907081-958d-4fb5-a8d2-ce8454adedfb\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.467266 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-bound-sa-token\") pod \"25907081-958d-4fb5-a8d2-ce8454adedfb\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.467291 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/25907081-958d-4fb5-a8d2-ce8454adedfb-installation-pull-secrets\") pod \"25907081-958d-4fb5-a8d2-ce8454adedfb\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.467332 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw587\" (UniqueName: \"kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-kube-api-access-jw587\") pod \"25907081-958d-4fb5-a8d2-ce8454adedfb\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.467355 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/25907081-958d-4fb5-a8d2-ce8454adedfb-ca-trust-extracted\") pod \"25907081-958d-4fb5-a8d2-ce8454adedfb\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.467383 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-registry-tls\") pod \"25907081-958d-4fb5-a8d2-ce8454adedfb\" (UID: \"25907081-958d-4fb5-a8d2-ce8454adedfb\") " Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.468929 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/25907081-958d-4fb5-a8d2-ce8454adedfb-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "25907081-958d-4fb5-a8d2-ce8454adedfb" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.469537 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25907081-958d-4fb5-a8d2-ce8454adedfb-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "25907081-958d-4fb5-a8d2-ce8454adedfb" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.474472 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-kube-api-access-jw587" (OuterVolumeSpecName: "kube-api-access-jw587") pod "25907081-958d-4fb5-a8d2-ce8454adedfb" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb"). InnerVolumeSpecName "kube-api-access-jw587". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.474705 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25907081-958d-4fb5-a8d2-ce8454adedfb-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "25907081-958d-4fb5-a8d2-ce8454adedfb" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.474937 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "25907081-958d-4fb5-a8d2-ce8454adedfb" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.475129 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "25907081-958d-4fb5-a8d2-ce8454adedfb" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.490723 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "25907081-958d-4fb5-a8d2-ce8454adedfb" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.500071 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25907081-958d-4fb5-a8d2-ce8454adedfb-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "25907081-958d-4fb5-a8d2-ce8454adedfb" (UID: "25907081-958d-4fb5-a8d2-ce8454adedfb"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.569007 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25907081-958d-4fb5-a8d2-ce8454adedfb-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.569049 4805 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/25907081-958d-4fb5-a8d2-ce8454adedfb-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.569062 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.569076 4805 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/25907081-958d-4fb5-a8d2-ce8454adedfb-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.569087 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw587\" (UniqueName: \"kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-kube-api-access-jw587\") on node \"crc\" DevicePath \"\"" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.569099 4805 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/25907081-958d-4fb5-a8d2-ce8454adedfb-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 16 12:02:32 crc kubenswrapper[4805]: I1216 12:02:32.569110 4805 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/25907081-958d-4fb5-a8d2-ce8454adedfb-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 16 12:02:33 crc kubenswrapper[4805]: I1216 12:02:33.210656 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" event={"ID":"25907081-958d-4fb5-a8d2-ce8454adedfb","Type":"ContainerDied","Data":"db1c9b47b929aea0a273a06775a86c674a2932297dc67f305c69c200e9533496"} Dec 16 12:02:33 crc kubenswrapper[4805]: I1216 12:02:33.211315 4805 scope.go:117] "RemoveContainer" containerID="2449ce921aa9b33b0ab481837dfd81cd444baa386f95cea55efc1db02defbee4" Dec 16 12:02:33 crc kubenswrapper[4805]: I1216 12:02:33.211434 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sb7p7" Dec 16 12:02:33 crc kubenswrapper[4805]: I1216 12:02:33.234657 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sb7p7"] Dec 16 12:02:33 crc kubenswrapper[4805]: I1216 12:02:33.238692 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sb7p7"] Dec 16 12:02:34 crc kubenswrapper[4805]: I1216 12:02:34.544411 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25907081-958d-4fb5-a8d2-ce8454adedfb" path="/var/lib/kubelet/pods/25907081-958d-4fb5-a8d2-ce8454adedfb/volumes" Dec 16 12:04:27 crc kubenswrapper[4805]: I1216 12:04:27.071579 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:04:27 crc kubenswrapper[4805]: I1216 12:04:27.072185 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:04:57 crc kubenswrapper[4805]: I1216 12:04:57.071668 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:04:57 crc kubenswrapper[4805]: I1216 12:04:57.072319 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:05:27 crc kubenswrapper[4805]: I1216 12:05:27.071668 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:05:27 crc kubenswrapper[4805]: I1216 12:05:27.072253 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:05:27 crc kubenswrapper[4805]: I1216 12:05:27.072339 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 12:05:27 crc kubenswrapper[4805]: I1216 12:05:27.073039 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53bfcd9e55d02e7133c707fa2852d5ae127e10c11b775a7d5cb91d28e2238a69"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Dec 16 12:05:27 crc kubenswrapper[4805]: I1216 12:05:27.073102 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://53bfcd9e55d02e7133c707fa2852d5ae127e10c11b775a7d5cb91d28e2238a69" gracePeriod=600 Dec 16 12:05:27 crc kubenswrapper[4805]: I1216 12:05:27.863024 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="53bfcd9e55d02e7133c707fa2852d5ae127e10c11b775a7d5cb91d28e2238a69" exitCode=0 Dec 16 12:05:27 crc kubenswrapper[4805]: I1216 12:05:27.863073 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"53bfcd9e55d02e7133c707fa2852d5ae127e10c11b775a7d5cb91d28e2238a69"} Dec 16 12:05:27 crc kubenswrapper[4805]: I1216 12:05:27.863110 4805 scope.go:117] "RemoveContainer" containerID="4cf9ea01ecb876171a7dbc132a7c5a959cb0acd82019690d41d23472548d1ff3" Dec 16 12:05:28 crc kubenswrapper[4805]: I1216 12:05:28.869551 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"92251edb6d76688dd19f1388e5267454cafee90ea7ff5bb9b248b2631cfd26c8"} Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.699638 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-mbkms"] Dec 16 12:07:21 crc kubenswrapper[4805]: E1216 12:07:21.700611 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25907081-958d-4fb5-a8d2-ce8454adedfb" containerName="registry" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.700631 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="25907081-958d-4fb5-a8d2-ce8454adedfb" containerName="registry" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.700771 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="25907081-958d-4fb5-a8d2-ce8454adedfb" containerName="registry" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.701385 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-mbkms" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.704489 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-6wgm7"] Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.705130 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.705399 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-6wgm7" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.705851 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-tb9vm" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.711078 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.716890 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-q2tmx" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.737722 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-9m9jg"] Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.738501 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-9m9jg" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.743084 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-mbkms"] Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.746174 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-4knlt" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.760649 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-9m9jg"] Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.763975 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-6wgm7"] Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.811358 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6qlr\" (UniqueName: \"kubernetes.io/projected/5a731837-c343-49f3-8bd9-26b04af9b2d0-kube-api-access-j6qlr\") pod \"cert-manager-cainjector-7f985d654d-mbkms\" (UID: \"5a731837-c343-49f3-8bd9-26b04af9b2d0\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-mbkms" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.811452 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msqgr\" (UniqueName: \"kubernetes.io/projected/31df596c-e28f-4424-8e69-09cadc77cd6d-kube-api-access-msqgr\") pod \"cert-manager-5b446d88c5-6wgm7\" (UID: \"31df596c-e28f-4424-8e69-09cadc77cd6d\") " pod="cert-manager/cert-manager-5b446d88c5-6wgm7" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.912993 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msqgr\" (UniqueName: \"kubernetes.io/projected/31df596c-e28f-4424-8e69-09cadc77cd6d-kube-api-access-msqgr\") pod \"cert-manager-5b446d88c5-6wgm7\" (UID: \"31df596c-e28f-4424-8e69-09cadc77cd6d\") " pod="cert-manager/cert-manager-5b446d88c5-6wgm7" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.913340 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q97pb\" (UniqueName: \"kubernetes.io/projected/dd1457ce-6da4-4b68-9bb5-7c57738c0ace-kube-api-access-q97pb\") pod \"cert-manager-webhook-5655c58dd6-9m9jg\" (UID: \"dd1457ce-6da4-4b68-9bb5-7c57738c0ace\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-9m9jg" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.913494 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-j6qlr\" (UniqueName: \"kubernetes.io/projected/5a731837-c343-49f3-8bd9-26b04af9b2d0-kube-api-access-j6qlr\") pod \"cert-manager-cainjector-7f985d654d-mbkms\" (UID: \"5a731837-c343-49f3-8bd9-26b04af9b2d0\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-mbkms" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.934427 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6qlr\" (UniqueName: \"kubernetes.io/projected/5a731837-c343-49f3-8bd9-26b04af9b2d0-kube-api-access-j6qlr\") pod \"cert-manager-cainjector-7f985d654d-mbkms\" (UID: \"5a731837-c343-49f3-8bd9-26b04af9b2d0\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-mbkms" Dec 16 12:07:21 crc kubenswrapper[4805]: I1216 12:07:21.934799 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msqgr\" (UniqueName: \"kubernetes.io/projected/31df596c-e28f-4424-8e69-09cadc77cd6d-kube-api-access-msqgr\") pod \"cert-manager-5b446d88c5-6wgm7\" (UID: \"31df596c-e28f-4424-8e69-09cadc77cd6d\") " pod="cert-manager/cert-manager-5b446d88c5-6wgm7" Dec 16 12:07:22 crc kubenswrapper[4805]: I1216 12:07:22.014630 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q97pb\" (UniqueName: \"kubernetes.io/projected/dd1457ce-6da4-4b68-9bb5-7c57738c0ace-kube-api-access-q97pb\") pod \"cert-manager-webhook-5655c58dd6-9m9jg\" (UID: \"dd1457ce-6da4-4b68-9bb5-7c57738c0ace\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-9m9jg" Dec 16 12:07:22 crc kubenswrapper[4805]: I1216 12:07:22.019571 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-mbkms" Dec 16 12:07:22 crc kubenswrapper[4805]: I1216 12:07:22.032026 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-6wgm7" Dec 16 12:07:22 crc kubenswrapper[4805]: I1216 12:07:22.039733 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q97pb\" (UniqueName: \"kubernetes.io/projected/dd1457ce-6da4-4b68-9bb5-7c57738c0ace-kube-api-access-q97pb\") pod \"cert-manager-webhook-5655c58dd6-9m9jg\" (UID: \"dd1457ce-6da4-4b68-9bb5-7c57738c0ace\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-9m9jg" Dec 16 12:07:22 crc kubenswrapper[4805]: I1216 12:07:22.051656 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-9m9jg" Dec 16 12:07:22 crc kubenswrapper[4805]: I1216 12:07:22.416076 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-9m9jg"] Dec 16 12:07:22 crc kubenswrapper[4805]: W1216 12:07:22.430588 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd1457ce_6da4_4b68_9bb5_7c57738c0ace.slice/crio-9705612883d0bf86c6f62572b7d585e0454ab292ceee0b4a4452f81f51204ca1 WatchSource:0}: Error finding container 9705612883d0bf86c6f62572b7d585e0454ab292ceee0b4a4452f81f51204ca1: Status 404 returned error can't find the container with id 9705612883d0bf86c6f62572b7d585e0454ab292ceee0b4a4452f81f51204ca1 Dec 16 12:07:22 crc kubenswrapper[4805]: I1216 12:07:22.439159 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 16 12:07:22 crc kubenswrapper[4805]: I1216 12:07:22.498958 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-mbkms"] Dec 16 12:07:22 crc kubenswrapper[4805]: W1216 12:07:22.505297 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a731837_c343_49f3_8bd9_26b04af9b2d0.slice/crio-5fb65971412b49468b718e4453edff69ad82e0a97056271e6fbf6c7885468bc9 WatchSource:0}: Error finding container 5fb65971412b49468b718e4453edff69ad82e0a97056271e6fbf6c7885468bc9: Status 404 returned error can't find the container with id 5fb65971412b49468b718e4453edff69ad82e0a97056271e6fbf6c7885468bc9 Dec 16 12:07:22 crc kubenswrapper[4805]: I1216 12:07:22.630753 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-mbkms" event={"ID":"5a731837-c343-49f3-8bd9-26b04af9b2d0","Type":"ContainerStarted","Data":"5fb65971412b49468b718e4453edff69ad82e0a97056271e6fbf6c7885468bc9"} Dec 16 12:07:22 crc kubenswrapper[4805]: I1216 12:07:22.632065 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-9m9jg" event={"ID":"dd1457ce-6da4-4b68-9bb5-7c57738c0ace","Type":"ContainerStarted","Data":"9705612883d0bf86c6f62572b7d585e0454ab292ceee0b4a4452f81f51204ca1"} Dec 16 12:07:22 crc kubenswrapper[4805]: I1216 12:07:22.657509 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-6wgm7"] Dec 16 12:07:22 crc kubenswrapper[4805]: W1216 12:07:22.664897 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31df596c_e28f_4424_8e69_09cadc77cd6d.slice/crio-9deab39470801c6985fbfad78cba54d1fff9afd3e1f7f121dd9c2733aa0313f7 WatchSource:0}: Error finding container 9deab39470801c6985fbfad78cba54d1fff9afd3e1f7f121dd9c2733aa0313f7: Status 404 returned error can't find the container with id 9deab39470801c6985fbfad78cba54d1fff9afd3e1f7f121dd9c2733aa0313f7 Dec 16 12:07:23 crc kubenswrapper[4805]: I1216 12:07:23.640340 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-6wgm7" event={"ID":"31df596c-e28f-4424-8e69-09cadc77cd6d","Type":"ContainerStarted","Data":"9deab39470801c6985fbfad78cba54d1fff9afd3e1f7f121dd9c2733aa0313f7"} Dec 16 12:07:26 crc kubenswrapper[4805]: I1216 12:07:26.822251 4805 scope.go:117] "RemoveContainer" containerID="84dff506d85c9cf058c0e4fb9549cf5b7e3aee1c945ea10cc189238cf8fdf9b9" Dec 16 
12:07:26 crc kubenswrapper[4805]: I1216 12:07:26.876584 4805 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 16 12:07:29 crc kubenswrapper[4805]: I1216 12:07:29.718369 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-6wgm7" event={"ID":"31df596c-e28f-4424-8e69-09cadc77cd6d","Type":"ContainerStarted","Data":"1dfabbe99f73b27c833a75f274cc50690c67c86a1842e19c7804651e46da0313"} Dec 16 12:07:29 crc kubenswrapper[4805]: I1216 12:07:29.722347 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-9m9jg" event={"ID":"dd1457ce-6da4-4b68-9bb5-7c57738c0ace","Type":"ContainerStarted","Data":"63b00f9c6a0ff835c594a86915aaced9af65d6e791583f703f5cceb8391dd3c9"} Dec 16 12:07:29 crc kubenswrapper[4805]: I1216 12:07:29.723036 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-9m9jg" Dec 16 12:07:29 crc kubenswrapper[4805]: I1216 12:07:29.732251 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-mbkms" event={"ID":"5a731837-c343-49f3-8bd9-26b04af9b2d0","Type":"ContainerStarted","Data":"5dd914c3f299f8b8e2cfe9f1ed65b33bf683765e53ea819669c747fe73c29994"} Dec 16 12:07:29 crc kubenswrapper[4805]: I1216 12:07:29.744359 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-6wgm7" podStartSLOduration=2.792001704 podStartE2EDuration="8.744341033s" podCreationTimestamp="2025-12-16 12:07:21 +0000 UTC" firstStartedPulling="2025-12-16 12:07:22.667785729 +0000 UTC m=+716.386043534" lastFinishedPulling="2025-12-16 12:07:28.620125038 +0000 UTC m=+722.338382863" observedRunningTime="2025-12-16 12:07:29.738370462 +0000 UTC m=+723.456628277" watchObservedRunningTime="2025-12-16 12:07:29.744341033 +0000 UTC m=+723.462598858" Dec 16 12:07:29 crc kubenswrapper[4805]: I1216 12:07:29.777191 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-9m9jg" podStartSLOduration=2.705333758 podStartE2EDuration="8.777135931s" podCreationTimestamp="2025-12-16 12:07:21 +0000 UTC" firstStartedPulling="2025-12-16 12:07:22.438865874 +0000 UTC m=+716.157123679" lastFinishedPulling="2025-12-16 12:07:28.510668047 +0000 UTC m=+722.228925852" observedRunningTime="2025-12-16 12:07:29.774745102 +0000 UTC m=+723.493002907" watchObservedRunningTime="2025-12-16 12:07:29.777135931 +0000 UTC m=+723.495393746" Dec 16 12:07:29 crc kubenswrapper[4805]: I1216 12:07:29.799821 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-mbkms" podStartSLOduration=2.8006137840000003 podStartE2EDuration="8.799793269s" podCreationTimestamp="2025-12-16 12:07:21 +0000 UTC" firstStartedPulling="2025-12-16 12:07:22.507314853 +0000 UTC m=+716.225572658" lastFinishedPulling="2025-12-16 12:07:28.506494338 +0000 UTC m=+722.224752143" observedRunningTime="2025-12-16 12:07:29.79353927 +0000 UTC m=+723.511797095" watchObservedRunningTime="2025-12-16 12:07:29.799793269 +0000 UTC m=+723.518051104" Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.470910 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lwjrh"] Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.472049 4805 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovn-controller" containerID="cri-o://87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69" gracePeriod=30 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.472228 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="sbdb" containerID="cri-o://f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31" gracePeriod=30 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.472240 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="northd" containerID="cri-o://2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7" gracePeriod=30 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.472211 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="kube-rbac-proxy-node" containerID="cri-o://cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a" gracePeriod=30 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.472222 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovn-acl-logging" containerID="cri-o://7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8" gracePeriod=30 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.472193 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca" gracePeriod=30 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.472749 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="nbdb" containerID="cri-o://52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65" gracePeriod=30 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.513574 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovnkube-controller" containerID="cri-o://ae1de15ea708b8695292548dc5dd04046670fdd69f9d040e21ff16aeabfa912d" gracePeriod=30 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.745422 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vffbc_369287d8-0d6d-483f-8c4b-5439ae4d065c/kube-multus/1.log" Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.746006 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vffbc_369287d8-0d6d-483f-8c4b-5439ae4d065c/kube-multus/0.log" Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.746050 4805 generic.go:334] "Generic (PLEG): container finished" podID="369287d8-0d6d-483f-8c4b-5439ae4d065c" containerID="d216456ebbb67a5782522dbfd434d574659cd483a4a2ca25f3a7a5bc963f8bd3" exitCode=2 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.746105 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vffbc" event={"ID":"369287d8-0d6d-483f-8c4b-5439ae4d065c","Type":"ContainerDied","Data":"d216456ebbb67a5782522dbfd434d574659cd483a4a2ca25f3a7a5bc963f8bd3"} Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.746155 4805 scope.go:117] "RemoveContainer" containerID="a8d4943320416c622c082bcd441cac6c91bb5648426b5800d93beaa993951879" Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.746644 4805 scope.go:117] "RemoveContainer" containerID="d216456ebbb67a5782522dbfd434d574659cd483a4a2ca25f3a7a5bc963f8bd3" Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.750641 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovnkube-controller/2.log" Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.756490 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovn-acl-logging/0.log" Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.757119 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovn-controller/0.log" Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.757612 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerID="ae1de15ea708b8695292548dc5dd04046670fdd69f9d040e21ff16aeabfa912d" exitCode=0 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.757639 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerID="f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31" exitCode=0 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.757649 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerID="52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65" exitCode=0 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.757658 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerID="2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7" exitCode=0 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.757666 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerID="b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca" exitCode=0 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.757674 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerID="cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a" exitCode=0 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.757683 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerID="7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8" exitCode=143 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.757691 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerID="87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69" exitCode=143 Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.758353 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" 
event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerDied","Data":"ae1de15ea708b8695292548dc5dd04046670fdd69f9d040e21ff16aeabfa912d"} Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.758389 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerDied","Data":"f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31"} Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.758404 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerDied","Data":"52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65"} Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.758417 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerDied","Data":"2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7"} Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.758428 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerDied","Data":"b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca"} Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.758438 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerDied","Data":"cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a"} Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.758446 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerDied","Data":"7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8"} Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.758455 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerDied","Data":"87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69"} Dec 16 12:07:31 crc kubenswrapper[4805]: I1216 12:07:31.781405 4805 scope.go:117] "RemoveContainer" containerID="9264cbe68589c85ed2bbb75f1cf1fa48d393dd9cf1a162aa2ec31160ab82fa57" Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.236177 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovn-acl-logging/0.log" Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.237260 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovn-controller/0.log" Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.237791 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.295839 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rpfvg"]
Dec 16 12:07:32 crc kubenswrapper[4805]: E1216 12:07:32.296191 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="sbdb"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296226 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="sbdb"
Dec 16 12:07:32 crc kubenswrapper[4805]: E1216 12:07:32.296249 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovn-acl-logging"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296261 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovn-acl-logging"
Dec 16 12:07:32 crc kubenswrapper[4805]: E1216 12:07:32.296274 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovnkube-controller"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296286 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovnkube-controller"
Dec 16 12:07:32 crc kubenswrapper[4805]: E1216 12:07:32.296303 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="kube-rbac-proxy-ovn-metrics"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296316 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="kube-rbac-proxy-ovn-metrics"
Dec 16 12:07:32 crc kubenswrapper[4805]: E1216 12:07:32.296334 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="nbdb"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296344 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="nbdb"
Dec 16 12:07:32 crc kubenswrapper[4805]: E1216 12:07:32.296358 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovn-controller"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296369 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovn-controller"
Dec 16 12:07:32 crc kubenswrapper[4805]: E1216 12:07:32.296382 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="kube-rbac-proxy-node"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296392 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="kube-rbac-proxy-node"
Dec 16 12:07:32 crc kubenswrapper[4805]: E1216 12:07:32.296416 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="kubecfg-setup"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296428 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="kubecfg-setup"
Dec 16 12:07:32 crc kubenswrapper[4805]: E1216 12:07:32.296442 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovnkube-controller"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296453 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovnkube-controller"
Dec 16 12:07:32 crc kubenswrapper[4805]: E1216 12:07:32.296464 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="northd"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296474 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="northd"
Dec 16 12:07:32 crc kubenswrapper[4805]: E1216 12:07:32.296488 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovnkube-controller"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296499 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovnkube-controller"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296656 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovnkube-controller"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296673 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovnkube-controller"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296684 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovn-controller"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296698 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="kube-rbac-proxy-node"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296730 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovnkube-controller"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296746 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="sbdb"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296761 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="kube-rbac-proxy-ovn-metrics"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296774 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovnkube-controller"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296806 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovn-acl-logging"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296825 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="northd"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.296850 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="nbdb"
Dec 16 12:07:32 crc kubenswrapper[4805]: E1216 12:07:32.297042 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovnkube-controller"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.297057 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" containerName="ovnkube-controller"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.299959 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.308682 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-etc-openvswitch\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.308719 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-run-netns\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.308768 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovnkube-script-lib\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.308801 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-ovn\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.308832 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-var-lib-openvswitch\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.308849 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-cni-bin\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.308832 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.308852 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.308891 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.308952 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.308932 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-log-socket" (OuterVolumeSpecName: "log-socket") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.308883 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-log-socket\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309004 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-openvswitch\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309022 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-node-log\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309048 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-run-ovn-kubernetes\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309074 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovnkube-config\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309096 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovn-node-metrics-cert\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309127 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309175 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-env-overrides\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309202 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-slash\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309223 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-kubelet\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309260 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-systemd-units\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309287 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-cni-netd\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309315 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlg5j\" (UniqueName: \"kubernetes.io/projected/cb7da1ad-f74d-471f-a98f-274cef7fe393-kube-api-access-tlg5j\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309337 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-systemd\") pod \"cb7da1ad-f74d-471f-a98f-274cef7fe393\" (UID: \"cb7da1ad-f74d-471f-a98f-274cef7fe393\") "
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.308984 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309084 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309339 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309454 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309476 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74647395-a5f5-49a4-b397-4f3d8576b67a-ovn-node-metrics-cert\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309493 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-slash\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309512 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-cni-netd\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309546 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl69m\" (UniqueName: \"kubernetes.io/projected/74647395-a5f5-49a4-b397-4f3d8576b67a-kube-api-access-tl69m\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309563 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-run-openvswitch\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309584 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74647395-a5f5-49a4-b397-4f3d8576b67a-env-overrides\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309601 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-etc-openvswitch\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309609 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309618 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-run-netns\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309700 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74647395-a5f5-49a4-b397-4f3d8576b67a-ovnkube-script-lib\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309850 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309877 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-kubelet\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309894 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-node-log" (OuterVolumeSpecName: "node-log") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309904 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-cni-bin\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309921 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309926 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74647395-a5f5-49a4-b397-4f3d8576b67a-ovnkube-config\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309948 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-node-log\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.309993 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-log-socket\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310037 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-run-systemd\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310069 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-run-ovn-kubernetes\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310098 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-run-ovn\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310122 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-var-lib-openvswitch\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310177 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-systemd-units\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310221 4805 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310237 4805 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310250 4805 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-run-netns\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310262 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310273 4805 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310284 4805 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310296 4805 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-cni-bin\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310305 4805 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-log-socket\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310316 4805 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310326 4805 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-node-log\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310338 4805 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310350 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310410 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-slash" (OuterVolumeSpecName: "host-slash") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310439 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310466 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310494 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.310570 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.315809 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.319590 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7da1ad-f74d-471f-a98f-274cef7fe393-kube-api-access-tlg5j" (OuterVolumeSpecName: "kube-api-access-tlg5j") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "kube-api-access-tlg5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.323295 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cb7da1ad-f74d-471f-a98f-274cef7fe393" (UID: "cb7da1ad-f74d-471f-a98f-274cef7fe393"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411002 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-etc-openvswitch\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411465 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-run-netns\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411131 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-etc-openvswitch\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411505 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-run-netns\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411551 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74647395-a5f5-49a4-b397-4f3d8576b67a-ovnkube-script-lib\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411649 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-kubelet\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411683 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-cni-bin\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411713 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-node-log\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411727 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-kubelet\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411790 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-cni-bin\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411735 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74647395-a5f5-49a4-b397-4f3d8576b67a-ovnkube-config\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411819 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-node-log\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411847 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-log-socket\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411908 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-run-systemd\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411950 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-log-socket\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411954 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-run-ovn-kubernetes\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.411981 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-run-ovn-kubernetes\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412010 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-run-systemd\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412023 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-run-ovn\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412057 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-var-lib-openvswitch\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412124 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-systemd-units\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412167 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412177 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-var-lib-openvswitch\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412211 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74647395-a5f5-49a4-b397-4f3d8576b67a-ovn-node-metrics-cert\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412228 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-systemd-units\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412239 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-slash\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412272 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-cni-netd\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412335 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl69m\" (UniqueName: \"kubernetes.io/projected/74647395-a5f5-49a4-b397-4f3d8576b67a-kube-api-access-tl69m\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412376 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-run-openvswitch\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412405 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74647395-a5f5-49a4-b397-4f3d8576b67a-ovnkube-config\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412413 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74647395-a5f5-49a4-b397-4f3d8576b67a-env-overrides\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412498 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb7da1ad-f74d-471f-a98f-274cef7fe393-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412521 4805 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412534 4805 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-slash\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412546 4805 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-kubelet\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412557 4805 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-systemd-units\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412569 4805 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-host-cni-netd\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412580 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlg5j\" (UniqueName: \"kubernetes.io/projected/cb7da1ad-f74d-471f-a98f-274cef7fe393-kube-api-access-tlg5j\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412591 4805 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb7da1ad-f74d-471f-a98f-274cef7fe393-run-systemd\") on node \"crc\" DevicePath \"\""
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412127 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-run-ovn\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412635 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-run-openvswitch\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412706 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-slash\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412739 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-cni-netd\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.412272 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74647395-a5f5-49a4-b397-4f3d8576b67a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.413080 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74647395-a5f5-49a4-b397-4f3d8576b67a-env-overrides\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.413527 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74647395-a5f5-49a4-b397-4f3d8576b67a-ovnkube-script-lib\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.415508 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74647395-a5f5-49a4-b397-4f3d8576b67a-ovn-node-metrics-cert\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.429823 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl69m\" (UniqueName: \"kubernetes.io/projected/74647395-a5f5-49a4-b397-4f3d8576b67a-kube-api-access-tl69m\") pod \"ovnkube-node-rpfvg\" (UID: \"74647395-a5f5-49a4-b397-4f3d8576b67a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.616200 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:32 crc kubenswrapper[4805]: W1216 12:07:32.640388 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74647395_a5f5_49a4_b397_4f3d8576b67a.slice/crio-e14b368e1accfb3e4de11efb26a21c680aea4e7e66c96827de71f0e17db10ad5 WatchSource:0}: Error finding container e14b368e1accfb3e4de11efb26a21c680aea4e7e66c96827de71f0e17db10ad5: Status 404 returned error can't find the container with id e14b368e1accfb3e4de11efb26a21c680aea4e7e66c96827de71f0e17db10ad5
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.764870 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vffbc_369287d8-0d6d-483f-8c4b-5439ae4d065c/kube-multus/1.log"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.764974 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vffbc" event={"ID":"369287d8-0d6d-483f-8c4b-5439ae4d065c","Type":"ContainerStarted","Data":"5d105fd167396cf288032abb1e9be96cf659bcef379af4c08cfadb7a29127a09"}
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.771977 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovn-acl-logging/0.log"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.772386 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lwjrh_cb7da1ad-f74d-471f-a98f-274cef7fe393/ovn-controller/0.log"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.772681 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh" event={"ID":"cb7da1ad-f74d-471f-a98f-274cef7fe393","Type":"ContainerDied","Data":"e5d3d11761d9ed9309f82d1141e5396f91bd9f21427e53c1ce87fcb15f1b577a"}
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.772711 4805 scope.go:117] "RemoveContainer" containerID="ae1de15ea708b8695292548dc5dd04046670fdd69f9d040e21ff16aeabfa912d"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.772809 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lwjrh"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.774765 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg" event={"ID":"74647395-a5f5-49a4-b397-4f3d8576b67a","Type":"ContainerStarted","Data":"e14b368e1accfb3e4de11efb26a21c680aea4e7e66c96827de71f0e17db10ad5"}
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.794122 4805 scope.go:117] "RemoveContainer" containerID="f8cf8ed4b713c3d5d946c859ba6ccb82f121105989a57bdbf04558e52c652a31"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.807808 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lwjrh"]
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.818293 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lwjrh"]
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.869676 4805 scope.go:117] "RemoveContainer" containerID="52766d87ef3c79f9462fb0c701882d2922f4b379ea4bd37eb178948ef329be65"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.888391 4805 scope.go:117] "RemoveContainer" containerID="2c7924c590af323a0d95931efa14ba98062f75e3b60518077b694da532aefdf7"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.901431 4805 scope.go:117] "RemoveContainer" containerID="b51ce4f186f3588ee4a99802877e4526b576eab624e5781b7288c249ed9b42ca"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.918411 4805 scope.go:117] "RemoveContainer" containerID="cc6524a372ff5dabf3c2d6addfdb05e9df43769758b1dcca0dbdde9dd91cb94a"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.933313 4805 scope.go:117] "RemoveContainer" containerID="7351543d84382f85a1a62e2e3df51c566d74a79508c775d222b15cbc9b40f6f8"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.946440 4805 scope.go:117] "RemoveContainer" containerID="87b99c60d4d0537fba651e7f72edd2c7d8f2760c6b54a93bba0c5f8777ca7a69"
Dec 16 12:07:32 crc kubenswrapper[4805]: I1216 12:07:32.961154 4805 scope.go:117] "RemoveContainer" containerID="10d4979b2c1fcd8dd57eeb34551360d90752b4d12d1b3d52aea260d28286ed99"
Dec 16 12:07:33 crc kubenswrapper[4805]: I1216 12:07:33.782043 4805 generic.go:334] "Generic (PLEG): container finished" podID="74647395-a5f5-49a4-b397-4f3d8576b67a" containerID="20abd42dc952124ee97272e62d29295def54f6ce452f0660901b10815231dc6a" exitCode=0
Dec 16 12:07:33 crc kubenswrapper[4805]: I1216 12:07:33.782117 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg" event={"ID":"74647395-a5f5-49a4-b397-4f3d8576b67a","Type":"ContainerDied","Data":"20abd42dc952124ee97272e62d29295def54f6ce452f0660901b10815231dc6a"}
Dec 16 12:07:34 crc kubenswrapper[4805]: I1216 12:07:34.532009 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb7da1ad-f74d-471f-a98f-274cef7fe393" path="/var/lib/kubelet/pods/cb7da1ad-f74d-471f-a98f-274cef7fe393/volumes"
Dec 16 12:07:34 crc kubenswrapper[4805]: I1216 12:07:34.796076 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg" event={"ID":"74647395-a5f5-49a4-b397-4f3d8576b67a","Type":"ContainerStarted","Data":"649f044b93a733e90da2acecaa97ec4fed4c85009f80e60a019d00520baf8cca"}
Dec 16 12:07:34 crc kubenswrapper[4805]: I1216 12:07:34.796126 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg" event={"ID":"74647395-a5f5-49a4-b397-4f3d8576b67a","Type":"ContainerStarted","Data":"c4a80f61e80b36db297cf87915b08a2f1f8a6ce4da8ad85144d8da7b6b742980"}
Dec 16 12:07:34 crc kubenswrapper[4805]: I1216 12:07:34.796200 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg" event={"ID":"74647395-a5f5-49a4-b397-4f3d8576b67a","Type":"ContainerStarted","Data":"869ac0dce0e8152d93985570e7f78c538d53b0a754633c21749ae7ab786fcbef"}
Dec 16 12:07:34 crc kubenswrapper[4805]: I1216 12:07:34.796215 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg" event={"ID":"74647395-a5f5-49a4-b397-4f3d8576b67a","Type":"ContainerStarted","Data":"fb8a1a2233757f49d820a8b6403f64f30b60777fa6136b675d4c4055c59cb557"}
Dec 16 12:07:34 crc kubenswrapper[4805]: I1216 12:07:34.796236 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg" event={"ID":"74647395-a5f5-49a4-b397-4f3d8576b67a","Type":"ContainerStarted","Data":"d1816ce8fcd882e72f32f9b4d431343f750189976d17c9bd6fe72aa63b0faea7"}
Dec 16 12:07:34 crc kubenswrapper[4805]: I1216 12:07:34.796248 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg" event={"ID":"74647395-a5f5-49a4-b397-4f3d8576b67a","Type":"ContainerStarted","Data":"1280001934ac168aa9e49f6ecc33a801081d705c04a0504f227db8797a3490fd"}
Dec 16 12:07:36 crc kubenswrapper[4805]: I1216 12:07:36.812238 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg" event={"ID":"74647395-a5f5-49a4-b397-4f3d8576b67a","Type":"ContainerStarted","Data":"090a53dcf1d0ed0b508474e2c3a7a35f3aee4a918b243fb02cc0573ed2ed1822"}
Dec 16 12:07:37 crc kubenswrapper[4805]: I1216 12:07:37.055471 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-9m9jg"
Dec 16 12:07:41 crc kubenswrapper[4805]: I1216 12:07:41.847249 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg" event={"ID":"74647395-a5f5-49a4-b397-4f3d8576b67a","Type":"ContainerStarted","Data":"06c7db8eeb247e944a1706efa4ad5dec4a4870102b33e3da41e27098c7dbca04"}
Dec 16 12:07:41 crc kubenswrapper[4805]: I1216 12:07:41.847810 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:41 crc kubenswrapper[4805]: I1216 12:07:41.847828 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:41 crc kubenswrapper[4805]: I1216 12:07:41.847839 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:41 crc kubenswrapper[4805]: I1216 12:07:41.877435 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:41 crc kubenswrapper[4805]: I1216 12:07:41.877993 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:07:41 crc kubenswrapper[4805]: I1216 12:07:41.883285 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg" podStartSLOduration=9.883263606 podStartE2EDuration="9.883263606s" podCreationTimestamp="2025-12-16 12:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:07:41.877022658 +0000 UTC m=+735.595280473" watchObservedRunningTime="2025-12-16 12:07:41.883263606 +0000 UTC m=+735.601521431"
Dec 16 12:07:57 crc kubenswrapper[4805]: I1216 12:07:57.072028 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 16 12:07:57 crc kubenswrapper[4805]: I1216 12:07:57.072583 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 16 12:08:02 crc kubenswrapper[4805]: I1216 12:08:02.647651 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rpfvg"
Dec 16 12:08:25 crc kubenswrapper[4805]: I1216 12:08:25.292885 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct"]
Dec 16 12:08:25 crc kubenswrapper[4805]: I1216 12:08:25.294935 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct"
Dec 16 12:08:25 crc kubenswrapper[4805]: I1216 12:08:25.297090 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Dec 16 12:08:25 crc kubenswrapper[4805]: I1216 12:08:25.310984 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct"]
Dec 16 12:08:25 crc kubenswrapper[4805]: I1216 12:08:25.474430 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67333641-ba72-4136-9667-27fe128bbd8f-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct\" (UID: \"67333641-ba72-4136-9667-27fe128bbd8f\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct"
Dec 16 12:08:25 crc kubenswrapper[4805]: I1216 12:08:25.474484 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67333641-ba72-4136-9667-27fe128bbd8f-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct\" (UID: \"67333641-ba72-4136-9667-27fe128bbd8f\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct"
Dec 16 12:08:25 crc kubenswrapper[4805]: I1216 12:08:25.474523 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjthx\" (UniqueName: \"kubernetes.io/projected/67333641-ba72-4136-9667-27fe128bbd8f-kube-api-access-qjthx\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct\" (UID: \"67333641-ba72-4136-9667-27fe128bbd8f\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct"
Dec 16 12:08:25 crc kubenswrapper[4805]: I1216 12:08:25.575471 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67333641-ba72-4136-9667-27fe128bbd8f-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct\" (UID: \"67333641-ba72-4136-9667-27fe128bbd8f\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct"
Dec 16 12:08:25 crc kubenswrapper[4805]: I1216 12:08:25.575820 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67333641-ba72-4136-9667-27fe128bbd8f-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct\" (UID: \"67333641-ba72-4136-9667-27fe128bbd8f\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct"
Dec 16 12:08:25 crc kubenswrapper[4805]: I1216 12:08:25.576028 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjthx\" (UniqueName: \"kubernetes.io/projected/67333641-ba72-4136-9667-27fe128bbd8f-kube-api-access-qjthx\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct\" (UID: \"67333641-ba72-4136-9667-27fe128bbd8f\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct"
Dec 16 12:08:25 crc kubenswrapper[4805]: I1216 12:08:25.576258 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67333641-ba72-4136-9667-27fe128bbd8f-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct\" (UID: \"67333641-ba72-4136-9667-27fe128bbd8f\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct"
Dec 16 12:08:25 crc kubenswrapper[4805]: I1216 12:08:25.576414 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67333641-ba72-4136-9667-27fe128bbd8f-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct\" (UID: \"67333641-ba72-4136-9667-27fe128bbd8f\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct"
Dec 16 12:08:25 crc kubenswrapper[4805]: I1216 12:08:25.595291 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjthx\" (UniqueName: \"kubernetes.io/projected/67333641-ba72-4136-9667-27fe128bbd8f-kube-api-access-qjthx\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct\" (UID: \"67333641-ba72-4136-9667-27fe128bbd8f\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct"
Dec 16 12:08:25 crc kubenswrapper[4805]: I1216 12:08:25.612426 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct"
Dec 16 12:08:25 crc kubenswrapper[4805]: I1216 12:08:25.873120 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct"]
Dec 16 12:08:26 crc kubenswrapper[4805]: I1216 12:08:26.118844 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct" event={"ID":"67333641-ba72-4136-9667-27fe128bbd8f","Type":"ContainerStarted","Data":"2aea833a56ef8b9205e46dd65e27ff9efa694332457f5d9fb36d5c2819ca96b2"}
Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.071638 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.072010 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.129515 4805 generic.go:334] "Generic (PLEG): container finished" podID="67333641-ba72-4136-9667-27fe128bbd8f" containerID="ab98c3a226273cb010d625b7694b07b1f17d18c7850c4b21cff4d9cc7398a9e8" exitCode=0
Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.129584 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct" event={"ID":"67333641-ba72-4136-9667-27fe128bbd8f","Type":"ContainerDied","Data":"ab98c3a226273cb010d625b7694b07b1f17d18c7850c4b21cff4d9cc7398a9e8"}
Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.473801 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cc7dm"]
Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.484392 4805 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.507774 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cc7dm"] Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.624277 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7bz8\" (UniqueName: \"kubernetes.io/projected/5c6654b0-9ff4-44a4-9962-5520f53e7691-kube-api-access-w7bz8\") pod \"redhat-operators-cc7dm\" (UID: \"5c6654b0-9ff4-44a4-9962-5520f53e7691\") " pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.624350 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c6654b0-9ff4-44a4-9962-5520f53e7691-catalog-content\") pod \"redhat-operators-cc7dm\" (UID: \"5c6654b0-9ff4-44a4-9962-5520f53e7691\") " pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.624396 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c6654b0-9ff4-44a4-9962-5520f53e7691-utilities\") pod \"redhat-operators-cc7dm\" (UID: \"5c6654b0-9ff4-44a4-9962-5520f53e7691\") " pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.725610 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7bz8\" (UniqueName: \"kubernetes.io/projected/5c6654b0-9ff4-44a4-9962-5520f53e7691-kube-api-access-w7bz8\") pod \"redhat-operators-cc7dm\" (UID: \"5c6654b0-9ff4-44a4-9962-5520f53e7691\") " pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.725656 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c6654b0-9ff4-44a4-9962-5520f53e7691-catalog-content\") pod \"redhat-operators-cc7dm\" (UID: \"5c6654b0-9ff4-44a4-9962-5520f53e7691\") " pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.725684 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c6654b0-9ff4-44a4-9962-5520f53e7691-utilities\") pod \"redhat-operators-cc7dm\" (UID: \"5c6654b0-9ff4-44a4-9962-5520f53e7691\") " pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.726155 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c6654b0-9ff4-44a4-9962-5520f53e7691-utilities\") pod \"redhat-operators-cc7dm\" (UID: \"5c6654b0-9ff4-44a4-9962-5520f53e7691\") " pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.726229 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c6654b0-9ff4-44a4-9962-5520f53e7691-catalog-content\") pod \"redhat-operators-cc7dm\" (UID: \"5c6654b0-9ff4-44a4-9962-5520f53e7691\") " pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.760538 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-w7bz8\" (UniqueName: \"kubernetes.io/projected/5c6654b0-9ff4-44a4-9962-5520f53e7691-kube-api-access-w7bz8\") pod \"redhat-operators-cc7dm\" (UID: \"5c6654b0-9ff4-44a4-9962-5520f53e7691\") " pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:27 crc kubenswrapper[4805]: I1216 12:08:27.814924 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:28 crc kubenswrapper[4805]: I1216 12:08:28.021971 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cc7dm"] Dec 16 12:08:28 crc kubenswrapper[4805]: I1216 12:08:28.137085 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc7dm" event={"ID":"5c6654b0-9ff4-44a4-9962-5520f53e7691","Type":"ContainerStarted","Data":"339fea27d7a10ca1973200db2e7fcf3647f65da396abeb8bf161e4c380002e65"} Dec 16 12:08:29 crc kubenswrapper[4805]: I1216 12:08:29.144291 4805 generic.go:334] "Generic (PLEG): container finished" podID="5c6654b0-9ff4-44a4-9962-5520f53e7691" containerID="ec148b161f678a09baa58c02584e23961b64da0da1b05209a3d62193ab072b37" exitCode=0 Dec 16 12:08:29 crc kubenswrapper[4805]: I1216 12:08:29.144343 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc7dm" event={"ID":"5c6654b0-9ff4-44a4-9962-5520f53e7691","Type":"ContainerDied","Data":"ec148b161f678a09baa58c02584e23961b64da0da1b05209a3d62193ab072b37"} Dec 16 12:08:29 crc kubenswrapper[4805]: I1216 12:08:29.146566 4805 generic.go:334] "Generic (PLEG): container finished" podID="67333641-ba72-4136-9667-27fe128bbd8f" containerID="12e454652689fe10ef1bb566315babdd9a08e4ffc3159959b79c3a70f783b78e" exitCode=0 Dec 16 12:08:29 crc kubenswrapper[4805]: I1216 12:08:29.146603 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct" event={"ID":"67333641-ba72-4136-9667-27fe128bbd8f","Type":"ContainerDied","Data":"12e454652689fe10ef1bb566315babdd9a08e4ffc3159959b79c3a70f783b78e"} Dec 16 12:08:30 crc kubenswrapper[4805]: I1216 12:08:30.154035 4805 generic.go:334] "Generic (PLEG): container finished" podID="67333641-ba72-4136-9667-27fe128bbd8f" containerID="baaafd91ca18f9a238862cbb02b157db8bf16f948e723fb7d9cadf3c7b063d37" exitCode=0 Dec 16 12:08:30 crc kubenswrapper[4805]: I1216 12:08:30.154081 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct" event={"ID":"67333641-ba72-4136-9667-27fe128bbd8f","Type":"ContainerDied","Data":"baaafd91ca18f9a238862cbb02b157db8bf16f948e723fb7d9cadf3c7b063d37"} Dec 16 12:08:30 crc kubenswrapper[4805]: I1216 12:08:30.156089 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc7dm" event={"ID":"5c6654b0-9ff4-44a4-9962-5520f53e7691","Type":"ContainerStarted","Data":"93e2e00dc2d0bf37f3c19ff4eb9ba55bffe302ee14527b501e5aa5f98f81771a"} Dec 16 12:08:31 crc kubenswrapper[4805]: I1216 12:08:31.163292 4805 generic.go:334] "Generic (PLEG): container finished" podID="5c6654b0-9ff4-44a4-9962-5520f53e7691" containerID="93e2e00dc2d0bf37f3c19ff4eb9ba55bffe302ee14527b501e5aa5f98f81771a" exitCode=0 Dec 16 12:08:31 crc kubenswrapper[4805]: I1216 12:08:31.163392 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc7dm" 
event={"ID":"5c6654b0-9ff4-44a4-9962-5520f53e7691","Type":"ContainerDied","Data":"93e2e00dc2d0bf37f3c19ff4eb9ba55bffe302ee14527b501e5aa5f98f81771a"} Dec 16 12:08:31 crc kubenswrapper[4805]: I1216 12:08:31.366311 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct" Dec 16 12:08:31 crc kubenswrapper[4805]: I1216 12:08:31.470702 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67333641-ba72-4136-9667-27fe128bbd8f-bundle\") pod \"67333641-ba72-4136-9667-27fe128bbd8f\" (UID: \"67333641-ba72-4136-9667-27fe128bbd8f\") " Dec 16 12:08:31 crc kubenswrapper[4805]: I1216 12:08:31.470757 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjthx\" (UniqueName: \"kubernetes.io/projected/67333641-ba72-4136-9667-27fe128bbd8f-kube-api-access-qjthx\") pod \"67333641-ba72-4136-9667-27fe128bbd8f\" (UID: \"67333641-ba72-4136-9667-27fe128bbd8f\") " Dec 16 12:08:31 crc kubenswrapper[4805]: I1216 12:08:31.470913 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67333641-ba72-4136-9667-27fe128bbd8f-util\") pod \"67333641-ba72-4136-9667-27fe128bbd8f\" (UID: \"67333641-ba72-4136-9667-27fe128bbd8f\") " Dec 16 12:08:31 crc kubenswrapper[4805]: I1216 12:08:31.471405 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67333641-ba72-4136-9667-27fe128bbd8f-bundle" (OuterVolumeSpecName: "bundle") pod "67333641-ba72-4136-9667-27fe128bbd8f" (UID: "67333641-ba72-4136-9667-27fe128bbd8f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:08:31 crc kubenswrapper[4805]: I1216 12:08:31.476333 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67333641-ba72-4136-9667-27fe128bbd8f-kube-api-access-qjthx" (OuterVolumeSpecName: "kube-api-access-qjthx") pod "67333641-ba72-4136-9667-27fe128bbd8f" (UID: "67333641-ba72-4136-9667-27fe128bbd8f"). InnerVolumeSpecName "kube-api-access-qjthx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:08:31 crc kubenswrapper[4805]: I1216 12:08:31.483877 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67333641-ba72-4136-9667-27fe128bbd8f-util" (OuterVolumeSpecName: "util") pod "67333641-ba72-4136-9667-27fe128bbd8f" (UID: "67333641-ba72-4136-9667-27fe128bbd8f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:08:31 crc kubenswrapper[4805]: I1216 12:08:31.572558 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67333641-ba72-4136-9667-27fe128bbd8f-util\") on node \"crc\" DevicePath \"\"" Dec 16 12:08:31 crc kubenswrapper[4805]: I1216 12:08:31.572603 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67333641-ba72-4136-9667-27fe128bbd8f-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:08:31 crc kubenswrapper[4805]: I1216 12:08:31.572615 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjthx\" (UniqueName: \"kubernetes.io/projected/67333641-ba72-4136-9667-27fe128bbd8f-kube-api-access-qjthx\") on node \"crc\" DevicePath \"\"" Dec 16 12:08:32 crc kubenswrapper[4805]: I1216 12:08:32.172387 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct" event={"ID":"67333641-ba72-4136-9667-27fe128bbd8f","Type":"ContainerDied","Data":"2aea833a56ef8b9205e46dd65e27ff9efa694332457f5d9fb36d5c2819ca96b2"} Dec 16 12:08:32 crc kubenswrapper[4805]: I1216 12:08:32.172434 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aea833a56ef8b9205e46dd65e27ff9efa694332457f5d9fb36d5c2819ca96b2" Dec 16 12:08:32 crc kubenswrapper[4805]: I1216 12:08:32.172510 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct" Dec 16 12:08:33 crc kubenswrapper[4805]: I1216 12:08:33.180649 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc7dm" event={"ID":"5c6654b0-9ff4-44a4-9962-5520f53e7691","Type":"ContainerStarted","Data":"da33b12f13fbceb4ca50fd8d0c26d2bcc1528b94191fe901afe95ee0579b0c87"} Dec 16 12:08:33 crc kubenswrapper[4805]: I1216 12:08:33.201355 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cc7dm" podStartSLOduration=3.215591225 podStartE2EDuration="6.201325687s" podCreationTimestamp="2025-12-16 12:08:27 +0000 UTC" firstStartedPulling="2025-12-16 12:08:29.14570817 +0000 UTC m=+782.863965975" lastFinishedPulling="2025-12-16 12:08:32.131442632 +0000 UTC m=+785.849700437" observedRunningTime="2025-12-16 12:08:33.198583298 +0000 UTC m=+786.916841163" watchObservedRunningTime="2025-12-16 12:08:33.201325687 +0000 UTC m=+786.919583572" Dec 16 12:08:35 crc kubenswrapper[4805]: I1216 12:08:35.863020 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-dvv9t"] Dec 16 12:08:35 crc kubenswrapper[4805]: E1216 12:08:35.863686 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67333641-ba72-4136-9667-27fe128bbd8f" containerName="extract" Dec 16 12:08:35 crc kubenswrapper[4805]: I1216 12:08:35.863703 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="67333641-ba72-4136-9667-27fe128bbd8f" containerName="extract" Dec 16 12:08:35 crc kubenswrapper[4805]: E1216 12:08:35.863721 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67333641-ba72-4136-9667-27fe128bbd8f" containerName="util" Dec 16 12:08:35 crc kubenswrapper[4805]: I1216 12:08:35.863728 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="67333641-ba72-4136-9667-27fe128bbd8f" containerName="util" Dec 16 12:08:35 crc 
kubenswrapper[4805]: E1216 12:08:35.863745 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67333641-ba72-4136-9667-27fe128bbd8f" containerName="pull" Dec 16 12:08:35 crc kubenswrapper[4805]: I1216 12:08:35.863753 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="67333641-ba72-4136-9667-27fe128bbd8f" containerName="pull" Dec 16 12:08:35 crc kubenswrapper[4805]: I1216 12:08:35.863889 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="67333641-ba72-4136-9667-27fe128bbd8f" containerName="extract" Dec 16 12:08:35 crc kubenswrapper[4805]: I1216 12:08:35.864391 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-dvv9t" Dec 16 12:08:35 crc kubenswrapper[4805]: I1216 12:08:35.868075 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 16 12:08:35 crc kubenswrapper[4805]: I1216 12:08:35.868126 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-m78n2" Dec 16 12:08:35 crc kubenswrapper[4805]: I1216 12:08:35.869672 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 16 12:08:35 crc kubenswrapper[4805]: I1216 12:08:35.890478 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-dvv9t"] Dec 16 12:08:36 crc kubenswrapper[4805]: I1216 12:08:36.027344 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvnll\" (UniqueName: \"kubernetes.io/projected/4bc64030-6cd9-48cb-8665-3424d3f6897c-kube-api-access-nvnll\") pod \"nmstate-operator-6769fb99d-dvv9t\" (UID: \"4bc64030-6cd9-48cb-8665-3424d3f6897c\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-dvv9t" Dec 16 12:08:36 crc kubenswrapper[4805]: I1216 12:08:36.129520 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvnll\" (UniqueName: \"kubernetes.io/projected/4bc64030-6cd9-48cb-8665-3424d3f6897c-kube-api-access-nvnll\") pod \"nmstate-operator-6769fb99d-dvv9t\" (UID: \"4bc64030-6cd9-48cb-8665-3424d3f6897c\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-dvv9t" Dec 16 12:08:36 crc kubenswrapper[4805]: I1216 12:08:36.155508 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvnll\" (UniqueName: \"kubernetes.io/projected/4bc64030-6cd9-48cb-8665-3424d3f6897c-kube-api-access-nvnll\") pod \"nmstate-operator-6769fb99d-dvv9t\" (UID: \"4bc64030-6cd9-48cb-8665-3424d3f6897c\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-dvv9t" Dec 16 12:08:36 crc kubenswrapper[4805]: I1216 12:08:36.182011 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-dvv9t" Dec 16 12:08:36 crc kubenswrapper[4805]: I1216 12:08:36.692820 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-dvv9t"] Dec 16 12:08:36 crc kubenswrapper[4805]: W1216 12:08:36.713307 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bc64030_6cd9_48cb_8665_3424d3f6897c.slice/crio-d571e929607f5182fc4a43f682e17d8e78736adac9107568099ca5c01b68d8de WatchSource:0}: Error finding container d571e929607f5182fc4a43f682e17d8e78736adac9107568099ca5c01b68d8de: Status 404 returned error can't find the container with id d571e929607f5182fc4a43f682e17d8e78736adac9107568099ca5c01b68d8de Dec 16 12:08:37 crc kubenswrapper[4805]: I1216 12:08:37.200658 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-dvv9t" event={"ID":"4bc64030-6cd9-48cb-8665-3424d3f6897c","Type":"ContainerStarted","Data":"d571e929607f5182fc4a43f682e17d8e78736adac9107568099ca5c01b68d8de"} Dec 16 12:08:37 crc kubenswrapper[4805]: I1216 12:08:37.815576 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:37 crc kubenswrapper[4805]: I1216 12:08:37.815839 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:37 crc kubenswrapper[4805]: I1216 12:08:37.925101 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:38 crc kubenswrapper[4805]: I1216 12:08:38.245190 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:40 crc kubenswrapper[4805]: I1216 12:08:40.260956 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cc7dm"] Dec 16 12:08:40 crc kubenswrapper[4805]: I1216 12:08:40.262177 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cc7dm" podUID="5c6654b0-9ff4-44a4-9962-5520f53e7691" containerName="registry-server" containerID="cri-o://da33b12f13fbceb4ca50fd8d0c26d2bcc1528b94191fe901afe95ee0579b0c87" gracePeriod=2 Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.070031 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.223510 4805 generic.go:334] "Generic (PLEG): container finished" podID="5c6654b0-9ff4-44a4-9962-5520f53e7691" containerID="da33b12f13fbceb4ca50fd8d0c26d2bcc1528b94191fe901afe95ee0579b0c87" exitCode=0 Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.223612 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cc7dm" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.223634 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc7dm" event={"ID":"5c6654b0-9ff4-44a4-9962-5520f53e7691","Type":"ContainerDied","Data":"da33b12f13fbceb4ca50fd8d0c26d2bcc1528b94191fe901afe95ee0579b0c87"} Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.224308 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc7dm" event={"ID":"5c6654b0-9ff4-44a4-9962-5520f53e7691","Type":"ContainerDied","Data":"339fea27d7a10ca1973200db2e7fcf3647f65da396abeb8bf161e4c380002e65"} Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.224335 4805 scope.go:117] "RemoveContainer" containerID="da33b12f13fbceb4ca50fd8d0c26d2bcc1528b94191fe901afe95ee0579b0c87" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.226222 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-dvv9t" event={"ID":"4bc64030-6cd9-48cb-8665-3424d3f6897c","Type":"ContainerStarted","Data":"a71ed259a8aa68436d100be357c10714dd57e54616696e92712c07224be5c570"} Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.245435 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-dvv9t" podStartSLOduration=2.108815473 podStartE2EDuration="6.245415994s" podCreationTimestamp="2025-12-16 12:08:35 +0000 UTC" firstStartedPulling="2025-12-16 12:08:36.715051646 +0000 UTC m=+790.433309451" lastFinishedPulling="2025-12-16 12:08:40.851652167 +0000 UTC m=+794.569909972" observedRunningTime="2025-12-16 12:08:41.241982365 +0000 UTC m=+794.960240170" watchObservedRunningTime="2025-12-16 12:08:41.245415994 +0000 UTC m=+794.963673809" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.250933 4805 scope.go:117] "RemoveContainer" containerID="93e2e00dc2d0bf37f3c19ff4eb9ba55bffe302ee14527b501e5aa5f98f81771a" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.267316 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c6654b0-9ff4-44a4-9962-5520f53e7691-catalog-content\") pod \"5c6654b0-9ff4-44a4-9962-5520f53e7691\" (UID: \"5c6654b0-9ff4-44a4-9962-5520f53e7691\") " Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.267413 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c6654b0-9ff4-44a4-9962-5520f53e7691-utilities\") pod \"5c6654b0-9ff4-44a4-9962-5520f53e7691\" (UID: \"5c6654b0-9ff4-44a4-9962-5520f53e7691\") " Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.267466 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7bz8\" (UniqueName: \"kubernetes.io/projected/5c6654b0-9ff4-44a4-9962-5520f53e7691-kube-api-access-w7bz8\") pod \"5c6654b0-9ff4-44a4-9962-5520f53e7691\" (UID: \"5c6654b0-9ff4-44a4-9962-5520f53e7691\") " Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.268567 4805 scope.go:117] "RemoveContainer" containerID="ec148b161f678a09baa58c02584e23961b64da0da1b05209a3d62193ab072b37" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.269172 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c6654b0-9ff4-44a4-9962-5520f53e7691-utilities" (OuterVolumeSpecName: "utilities") pod 
"5c6654b0-9ff4-44a4-9962-5520f53e7691" (UID: "5c6654b0-9ff4-44a4-9962-5520f53e7691"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.272094 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c6654b0-9ff4-44a4-9962-5520f53e7691-kube-api-access-w7bz8" (OuterVolumeSpecName: "kube-api-access-w7bz8") pod "5c6654b0-9ff4-44a4-9962-5520f53e7691" (UID: "5c6654b0-9ff4-44a4-9962-5520f53e7691"). InnerVolumeSpecName "kube-api-access-w7bz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.287514 4805 scope.go:117] "RemoveContainer" containerID="da33b12f13fbceb4ca50fd8d0c26d2bcc1528b94191fe901afe95ee0579b0c87" Dec 16 12:08:41 crc kubenswrapper[4805]: E1216 12:08:41.287974 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da33b12f13fbceb4ca50fd8d0c26d2bcc1528b94191fe901afe95ee0579b0c87\": container with ID starting with da33b12f13fbceb4ca50fd8d0c26d2bcc1528b94191fe901afe95ee0579b0c87 not found: ID does not exist" containerID="da33b12f13fbceb4ca50fd8d0c26d2bcc1528b94191fe901afe95ee0579b0c87" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.288007 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da33b12f13fbceb4ca50fd8d0c26d2bcc1528b94191fe901afe95ee0579b0c87"} err="failed to get container status \"da33b12f13fbceb4ca50fd8d0c26d2bcc1528b94191fe901afe95ee0579b0c87\": rpc error: code = NotFound desc = could not find container \"da33b12f13fbceb4ca50fd8d0c26d2bcc1528b94191fe901afe95ee0579b0c87\": container with ID starting with da33b12f13fbceb4ca50fd8d0c26d2bcc1528b94191fe901afe95ee0579b0c87 not found: ID does not exist" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.288027 4805 scope.go:117] "RemoveContainer" containerID="93e2e00dc2d0bf37f3c19ff4eb9ba55bffe302ee14527b501e5aa5f98f81771a" Dec 16 12:08:41 crc kubenswrapper[4805]: E1216 12:08:41.288323 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93e2e00dc2d0bf37f3c19ff4eb9ba55bffe302ee14527b501e5aa5f98f81771a\": container with ID starting with 93e2e00dc2d0bf37f3c19ff4eb9ba55bffe302ee14527b501e5aa5f98f81771a not found: ID does not exist" containerID="93e2e00dc2d0bf37f3c19ff4eb9ba55bffe302ee14527b501e5aa5f98f81771a" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.288373 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93e2e00dc2d0bf37f3c19ff4eb9ba55bffe302ee14527b501e5aa5f98f81771a"} err="failed to get container status \"93e2e00dc2d0bf37f3c19ff4eb9ba55bffe302ee14527b501e5aa5f98f81771a\": rpc error: code = NotFound desc = could not find container \"93e2e00dc2d0bf37f3c19ff4eb9ba55bffe302ee14527b501e5aa5f98f81771a\": container with ID starting with 93e2e00dc2d0bf37f3c19ff4eb9ba55bffe302ee14527b501e5aa5f98f81771a not found: ID does not exist" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.288401 4805 scope.go:117] "RemoveContainer" containerID="ec148b161f678a09baa58c02584e23961b64da0da1b05209a3d62193ab072b37" Dec 16 12:08:41 crc kubenswrapper[4805]: E1216 12:08:41.288736 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec148b161f678a09baa58c02584e23961b64da0da1b05209a3d62193ab072b37\": 
container with ID starting with ec148b161f678a09baa58c02584e23961b64da0da1b05209a3d62193ab072b37 not found: ID does not exist" containerID="ec148b161f678a09baa58c02584e23961b64da0da1b05209a3d62193ab072b37" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.288826 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec148b161f678a09baa58c02584e23961b64da0da1b05209a3d62193ab072b37"} err="failed to get container status \"ec148b161f678a09baa58c02584e23961b64da0da1b05209a3d62193ab072b37\": rpc error: code = NotFound desc = could not find container \"ec148b161f678a09baa58c02584e23961b64da0da1b05209a3d62193ab072b37\": container with ID starting with ec148b161f678a09baa58c02584e23961b64da0da1b05209a3d62193ab072b37 not found: ID does not exist" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.368600 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c6654b0-9ff4-44a4-9962-5520f53e7691-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.368634 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7bz8\" (UniqueName: \"kubernetes.io/projected/5c6654b0-9ff4-44a4-9962-5520f53e7691-kube-api-access-w7bz8\") on node \"crc\" DevicePath \"\"" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.790559 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c6654b0-9ff4-44a4-9962-5520f53e7691-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c6654b0-9ff4-44a4-9962-5520f53e7691" (UID: "5c6654b0-9ff4-44a4-9962-5520f53e7691"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.858735 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cc7dm"] Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.861729 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cc7dm"] Dec 16 12:08:41 crc kubenswrapper[4805]: I1216 12:08:41.874236 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c6654b0-9ff4-44a4-9962-5520f53e7691-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:08:42 crc kubenswrapper[4805]: I1216 12:08:42.531642 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c6654b0-9ff4-44a4-9962-5520f53e7691" path="/var/lib/kubelet/pods/5c6654b0-9ff4-44a4-9962-5520f53e7691/volumes" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.499065 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-z2kdh"] Dec 16 12:08:45 crc kubenswrapper[4805]: E1216 12:08:45.499610 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6654b0-9ff4-44a4-9962-5520f53e7691" containerName="extract-content" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.499626 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6654b0-9ff4-44a4-9962-5520f53e7691" containerName="extract-content" Dec 16 12:08:45 crc kubenswrapper[4805]: E1216 12:08:45.499643 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6654b0-9ff4-44a4-9962-5520f53e7691" containerName="extract-utilities" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.499650 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5c6654b0-9ff4-44a4-9962-5520f53e7691" containerName="extract-utilities" Dec 16 12:08:45 crc kubenswrapper[4805]: E1216 12:08:45.499667 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6654b0-9ff4-44a4-9962-5520f53e7691" containerName="registry-server" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.499674 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6654b0-9ff4-44a4-9962-5520f53e7691" containerName="registry-server" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.499805 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6654b0-9ff4-44a4-9962-5520f53e7691" containerName="registry-server" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.500461 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-z2kdh" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.506109 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-sfx4x" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.510845 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-z2kdh"] Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.518823 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v"] Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.519780 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.521655 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/212a3b56-a221-4818-a463-90cc9e4e46e5-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-m8h9v\" (UID: \"212a3b56-a221-4818-a463-90cc9e4e46e5\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.521821 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j77km\" (UniqueName: \"kubernetes.io/projected/31cb3421-4893-45b4-bb8d-8afd77fe9cb2-kube-api-access-j77km\") pod \"nmstate-metrics-7f7f7578db-z2kdh\" (UID: \"31cb3421-4893-45b4-bb8d-8afd77fe9cb2\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-z2kdh" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.521920 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vwx4\" (UniqueName: \"kubernetes.io/projected/212a3b56-a221-4818-a463-90cc9e4e46e5-kube-api-access-7vwx4\") pod \"nmstate-webhook-f8fb84555-m8h9v\" (UID: \"212a3b56-a221-4818-a463-90cc9e4e46e5\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.531466 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.554586 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v"] Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.560738 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lxsrn"] Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.561483 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.622866 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/212a3b56-a221-4818-a463-90cc9e4e46e5-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-m8h9v\" (UID: \"212a3b56-a221-4818-a463-90cc9e4e46e5\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.623201 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j77km\" (UniqueName: \"kubernetes.io/projected/31cb3421-4893-45b4-bb8d-8afd77fe9cb2-kube-api-access-j77km\") pod \"nmstate-metrics-7f7f7578db-z2kdh\" (UID: \"31cb3421-4893-45b4-bb8d-8afd77fe9cb2\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-z2kdh" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.623361 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vwx4\" (UniqueName: \"kubernetes.io/projected/212a3b56-a221-4818-a463-90cc9e4e46e5-kube-api-access-7vwx4\") pod \"nmstate-webhook-f8fb84555-m8h9v\" (UID: \"212a3b56-a221-4818-a463-90cc9e4e46e5\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v" Dec 16 12:08:45 crc kubenswrapper[4805]: E1216 12:08:45.623117 4805 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 16 12:08:45 crc kubenswrapper[4805]: E1216 12:08:45.623642 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/212a3b56-a221-4818-a463-90cc9e4e46e5-tls-key-pair podName:212a3b56-a221-4818-a463-90cc9e4e46e5 nodeName:}" failed. No retries permitted until 2025-12-16 12:08:46.123602925 +0000 UTC m=+799.841860730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/212a3b56-a221-4818-a463-90cc9e4e46e5-tls-key-pair") pod "nmstate-webhook-f8fb84555-m8h9v" (UID: "212a3b56-a221-4818-a463-90cc9e4e46e5") : secret "openshift-nmstate-webhook" not found Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.647315 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j77km\" (UniqueName: \"kubernetes.io/projected/31cb3421-4893-45b4-bb8d-8afd77fe9cb2-kube-api-access-j77km\") pod \"nmstate-metrics-7f7f7578db-z2kdh\" (UID: \"31cb3421-4893-45b4-bb8d-8afd77fe9cb2\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-z2kdh" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.667861 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vwx4\" (UniqueName: \"kubernetes.io/projected/212a3b56-a221-4818-a463-90cc9e4e46e5-kube-api-access-7vwx4\") pod \"nmstate-webhook-f8fb84555-m8h9v\" (UID: \"212a3b56-a221-4818-a463-90cc9e4e46e5\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.689396 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk"] Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.690118 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.700611 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.701058 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.702159 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-6wgkb" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.716634 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk"] Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.724836 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ff660e50-710f-494e-aa58-66abf3868df5-dbus-socket\") pod \"nmstate-handler-lxsrn\" (UID: \"ff660e50-710f-494e-aa58-66abf3868df5\") " pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.724913 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95lff\" (UniqueName: \"kubernetes.io/projected/ff660e50-710f-494e-aa58-66abf3868df5-kube-api-access-95lff\") pod \"nmstate-handler-lxsrn\" (UID: \"ff660e50-710f-494e-aa58-66abf3868df5\") " pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.724953 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ff660e50-710f-494e-aa58-66abf3868df5-nmstate-lock\") pod \"nmstate-handler-lxsrn\" (UID: \"ff660e50-710f-494e-aa58-66abf3868df5\") " pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.725035 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ff660e50-710f-494e-aa58-66abf3868df5-ovs-socket\") pod \"nmstate-handler-lxsrn\" (UID: \"ff660e50-710f-494e-aa58-66abf3868df5\") " pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.816692 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-z2kdh" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.826438 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95lff\" (UniqueName: \"kubernetes.io/projected/ff660e50-710f-494e-aa58-66abf3868df5-kube-api-access-95lff\") pod \"nmstate-handler-lxsrn\" (UID: \"ff660e50-710f-494e-aa58-66abf3868df5\") " pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.826694 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ff660e50-710f-494e-aa58-66abf3868df5-nmstate-lock\") pod \"nmstate-handler-lxsrn\" (UID: \"ff660e50-710f-494e-aa58-66abf3868df5\") " pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.826794 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ff660e50-710f-494e-aa58-66abf3868df5-nmstate-lock\") pod \"nmstate-handler-lxsrn\" (UID: \"ff660e50-710f-494e-aa58-66abf3868df5\") " pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.826898 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fbc1c7a-286e-4428-a745-32211779781e-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-8v4hk\" (UID: \"8fbc1c7a-286e-4428-a745-32211779781e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.827019 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8fbc1c7a-286e-4428-a745-32211779781e-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-8v4hk\" (UID: \"8fbc1c7a-286e-4428-a745-32211779781e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.827211 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ff660e50-710f-494e-aa58-66abf3868df5-ovs-socket\") pod \"nmstate-handler-lxsrn\" (UID: \"ff660e50-710f-494e-aa58-66abf3868df5\") " pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.827337 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ff660e50-710f-494e-aa58-66abf3868df5-dbus-socket\") pod \"nmstate-handler-lxsrn\" (UID: \"ff660e50-710f-494e-aa58-66abf3868df5\") " pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.827433 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqjwv\" (UniqueName: \"kubernetes.io/projected/8fbc1c7a-286e-4428-a745-32211779781e-kube-api-access-sqjwv\") pod \"nmstate-console-plugin-6ff7998486-8v4hk\" (UID: \"8fbc1c7a-286e-4428-a745-32211779781e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.827283 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ff660e50-710f-494e-aa58-66abf3868df5-ovs-socket\") pod 
\"nmstate-handler-lxsrn\" (UID: \"ff660e50-710f-494e-aa58-66abf3868df5\") " pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.827599 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ff660e50-710f-494e-aa58-66abf3868df5-dbus-socket\") pod \"nmstate-handler-lxsrn\" (UID: \"ff660e50-710f-494e-aa58-66abf3868df5\") " pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.873869 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95lff\" (UniqueName: \"kubernetes.io/projected/ff660e50-710f-494e-aa58-66abf3868df5-kube-api-access-95lff\") pod \"nmstate-handler-lxsrn\" (UID: \"ff660e50-710f-494e-aa58-66abf3868df5\") " pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.897453 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.916598 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5ffd7578b8-ljflf"] Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.917455 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:45 crc kubenswrapper[4805]: W1216 12:08:45.917677 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff660e50_710f_494e_aa58_66abf3868df5.slice/crio-68b65ae11ff98c1749ced7b13bdf575041aab13eb41b0ed716b692deef98c715 WatchSource:0}: Error finding container 68b65ae11ff98c1749ced7b13bdf575041aab13eb41b0ed716b692deef98c715: Status 404 returned error can't find the container with id 68b65ae11ff98c1749ced7b13bdf575041aab13eb41b0ed716b692deef98c715 Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.928461 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fbc1c7a-286e-4428-a745-32211779781e-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-8v4hk\" (UID: \"8fbc1c7a-286e-4428-a745-32211779781e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.928503 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8fbc1c7a-286e-4428-a745-32211779781e-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-8v4hk\" (UID: \"8fbc1c7a-286e-4428-a745-32211779781e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.928573 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqjwv\" (UniqueName: \"kubernetes.io/projected/8fbc1c7a-286e-4428-a745-32211779781e-kube-api-access-sqjwv\") pod \"nmstate-console-plugin-6ff7998486-8v4hk\" (UID: \"8fbc1c7a-286e-4428-a745-32211779781e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk" Dec 16 12:08:45 crc kubenswrapper[4805]: E1216 12:08:45.928999 4805 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 16 12:08:45 crc kubenswrapper[4805]: E1216 12:08:45.929219 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/8fbc1c7a-286e-4428-a745-32211779781e-plugin-serving-cert podName:8fbc1c7a-286e-4428-a745-32211779781e nodeName:}" failed. No retries permitted until 2025-12-16 12:08:46.429200563 +0000 UTC m=+800.147458368 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/8fbc1c7a-286e-4428-a745-32211779781e-plugin-serving-cert") pod "nmstate-console-plugin-6ff7998486-8v4hk" (UID: "8fbc1c7a-286e-4428-a745-32211779781e") : secret "plugin-serving-cert" not found Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.930289 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8fbc1c7a-286e-4428-a745-32211779781e-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-8v4hk\" (UID: \"8fbc1c7a-286e-4428-a745-32211779781e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.949788 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqjwv\" (UniqueName: \"kubernetes.io/projected/8fbc1c7a-286e-4428-a745-32211779781e-kube-api-access-sqjwv\") pod \"nmstate-console-plugin-6ff7998486-8v4hk\" (UID: \"8fbc1c7a-286e-4428-a745-32211779781e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk" Dec 16 12:08:45 crc kubenswrapper[4805]: I1216 12:08:45.980815 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5ffd7578b8-ljflf"] Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.033655 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7fk6\" (UniqueName: \"kubernetes.io/projected/acc51fc9-54bc-4478-b288-de10a9446d3c-kube-api-access-k7fk6\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.033907 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acc51fc9-54bc-4478-b288-de10a9446d3c-trusted-ca-bundle\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.033927 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acc51fc9-54bc-4478-b288-de10a9446d3c-console-serving-cert\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.033974 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acc51fc9-54bc-4478-b288-de10a9446d3c-service-ca\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.034006 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acc51fc9-54bc-4478-b288-de10a9446d3c-console-oauth-config\") pod \"console-5ffd7578b8-ljflf\" (UID: 
\"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.034219 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acc51fc9-54bc-4478-b288-de10a9446d3c-console-config\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.034263 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acc51fc9-54bc-4478-b288-de10a9446d3c-oauth-serving-cert\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.109350 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-z2kdh"] Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.135674 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acc51fc9-54bc-4478-b288-de10a9446d3c-console-config\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.135730 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acc51fc9-54bc-4478-b288-de10a9446d3c-oauth-serving-cert\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.135774 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/212a3b56-a221-4818-a463-90cc9e4e46e5-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-m8h9v\" (UID: \"212a3b56-a221-4818-a463-90cc9e4e46e5\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.135792 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7fk6\" (UniqueName: \"kubernetes.io/projected/acc51fc9-54bc-4478-b288-de10a9446d3c-kube-api-access-k7fk6\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.135810 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acc51fc9-54bc-4478-b288-de10a9446d3c-trusted-ca-bundle\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.135826 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acc51fc9-54bc-4478-b288-de10a9446d3c-console-serving-cert\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.135844 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acc51fc9-54bc-4478-b288-de10a9446d3c-service-ca\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.135884 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acc51fc9-54bc-4478-b288-de10a9446d3c-console-oauth-config\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.136930 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acc51fc9-54bc-4478-b288-de10a9446d3c-oauth-serving-cert\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.137873 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acc51fc9-54bc-4478-b288-de10a9446d3c-service-ca\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.137905 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acc51fc9-54bc-4478-b288-de10a9446d3c-trusted-ca-bundle\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.138027 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acc51fc9-54bc-4478-b288-de10a9446d3c-console-config\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.141099 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acc51fc9-54bc-4478-b288-de10a9446d3c-console-oauth-config\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.141127 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/212a3b56-a221-4818-a463-90cc9e4e46e5-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-m8h9v\" (UID: \"212a3b56-a221-4818-a463-90cc9e4e46e5\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.259381 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lxsrn" event={"ID":"ff660e50-710f-494e-aa58-66abf3868df5","Type":"ContainerStarted","Data":"68b65ae11ff98c1749ced7b13bdf575041aab13eb41b0ed716b692deef98c715"} Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.260903 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-z2kdh" 
event={"ID":"31cb3421-4893-45b4-bb8d-8afd77fe9cb2","Type":"ContainerStarted","Data":"4f013ec521f754a9e5587446d96b9a86329d684a40fc7173f8fcd401aa8f16a5"} Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.436854 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.439409 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fbc1c7a-286e-4428-a745-32211779781e-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-8v4hk\" (UID: \"8fbc1c7a-286e-4428-a745-32211779781e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.443733 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fbc1c7a-286e-4428-a745-32211779781e-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-8v4hk\" (UID: \"8fbc1c7a-286e-4428-a745-32211779781e\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk" Dec 16 12:08:46 crc kubenswrapper[4805]: I1216 12:08:46.610407 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk" Dec 16 12:08:47 crc kubenswrapper[4805]: I1216 12:08:47.568406 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acc51fc9-54bc-4478-b288-de10a9446d3c-console-serving-cert\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:47 crc kubenswrapper[4805]: I1216 12:08:47.572054 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7fk6\" (UniqueName: \"kubernetes.io/projected/acc51fc9-54bc-4478-b288-de10a9446d3c-kube-api-access-k7fk6\") pod \"console-5ffd7578b8-ljflf\" (UID: \"acc51fc9-54bc-4478-b288-de10a9446d3c\") " pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:47 crc kubenswrapper[4805]: I1216 12:08:47.743900 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:48 crc kubenswrapper[4805]: I1216 12:08:48.035320 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v"] Dec 16 12:08:48 crc kubenswrapper[4805]: W1216 12:08:48.043373 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod212a3b56_a221_4818_a463_90cc9e4e46e5.slice/crio-6c5acddd30c23613e6f2c1b25e10d483313481c5a1f6d1151d0f9c152fe842c1 WatchSource:0}: Error finding container 6c5acddd30c23613e6f2c1b25e10d483313481c5a1f6d1151d0f9c152fe842c1: Status 404 returned error can't find the container with id 6c5acddd30c23613e6f2c1b25e10d483313481c5a1f6d1151d0f9c152fe842c1 Dec 16 12:08:48 crc kubenswrapper[4805]: I1216 12:08:48.103233 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk"] Dec 16 12:08:48 crc kubenswrapper[4805]: W1216 12:08:48.109634 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fbc1c7a_286e_4428_a745_32211779781e.slice/crio-0d237b35ca557c5aa80d50f275248b5984e80a290080f611a41ee2bc0711ec3e WatchSource:0}: Error finding container 0d237b35ca557c5aa80d50f275248b5984e80a290080f611a41ee2bc0711ec3e: Status 404 returned error can't find the container with id 0d237b35ca557c5aa80d50f275248b5984e80a290080f611a41ee2bc0711ec3e Dec 16 12:08:48 crc kubenswrapper[4805]: I1216 12:08:48.207727 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5ffd7578b8-ljflf"] Dec 16 12:08:48 crc kubenswrapper[4805]: W1216 12:08:48.209729 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacc51fc9_54bc_4478_b288_de10a9446d3c.slice/crio-7ce760e35588c563e71496d8751011a48a83854f0018a8f50d98fe91a073e14d WatchSource:0}: Error finding container 7ce760e35588c563e71496d8751011a48a83854f0018a8f50d98fe91a073e14d: Status 404 returned error can't find the container with id 7ce760e35588c563e71496d8751011a48a83854f0018a8f50d98fe91a073e14d Dec 16 12:08:48 crc kubenswrapper[4805]: I1216 12:08:48.276538 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk" event={"ID":"8fbc1c7a-286e-4428-a745-32211779781e","Type":"ContainerStarted","Data":"0d237b35ca557c5aa80d50f275248b5984e80a290080f611a41ee2bc0711ec3e"} Dec 16 12:08:48 crc kubenswrapper[4805]: I1216 12:08:48.277694 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v" event={"ID":"212a3b56-a221-4818-a463-90cc9e4e46e5","Type":"ContainerStarted","Data":"6c5acddd30c23613e6f2c1b25e10d483313481c5a1f6d1151d0f9c152fe842c1"} Dec 16 12:08:48 crc kubenswrapper[4805]: I1216 12:08:48.278848 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5ffd7578b8-ljflf" event={"ID":"acc51fc9-54bc-4478-b288-de10a9446d3c","Type":"ContainerStarted","Data":"7ce760e35588c563e71496d8751011a48a83854f0018a8f50d98fe91a073e14d"} Dec 16 12:08:49 crc kubenswrapper[4805]: I1216 12:08:49.285495 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5ffd7578b8-ljflf" event={"ID":"acc51fc9-54bc-4478-b288-de10a9446d3c","Type":"ContainerStarted","Data":"a36b72dad12a2226f63097ca86b19fabe39abbd1680688300d52d74644dd8e4c"} Dec 16 12:08:49 crc 
kubenswrapper[4805]: I1216 12:08:49.310758 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5ffd7578b8-ljflf" podStartSLOduration=4.31073623 podStartE2EDuration="4.31073623s" podCreationTimestamp="2025-12-16 12:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:08:49.304972925 +0000 UTC m=+803.023230730" watchObservedRunningTime="2025-12-16 12:08:49.31073623 +0000 UTC m=+803.028994055" Dec 16 12:08:52 crc kubenswrapper[4805]: I1216 12:08:52.305266 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v" event={"ID":"212a3b56-a221-4818-a463-90cc9e4e46e5","Type":"ContainerStarted","Data":"473e208003891b8e0068b35cb9ca26a1a4e8c219cbc9ede072e70de96fa12bcc"} Dec 16 12:08:52 crc kubenswrapper[4805]: I1216 12:08:52.305564 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v" Dec 16 12:08:52 crc kubenswrapper[4805]: I1216 12:08:52.307532 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lxsrn" event={"ID":"ff660e50-710f-494e-aa58-66abf3868df5","Type":"ContainerStarted","Data":"670e95e58294b62312b615d03c3207aac8a2bda8d6e94829bda0b8b13ddb2bef"} Dec 16 12:08:52 crc kubenswrapper[4805]: I1216 12:08:52.307656 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:08:52 crc kubenswrapper[4805]: I1216 12:08:52.309496 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-z2kdh" event={"ID":"31cb3421-4893-45b4-bb8d-8afd77fe9cb2","Type":"ContainerStarted","Data":"b75ca15f9c0090b0992cf18ade87d1765023a9b42965c0d2c7c4d61d17ee11fb"} Dec 16 12:08:52 crc kubenswrapper[4805]: I1216 12:08:52.328803 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v" podStartSLOduration=3.890259516 podStartE2EDuration="7.328778909s" podCreationTimestamp="2025-12-16 12:08:45 +0000 UTC" firstStartedPulling="2025-12-16 12:08:48.045397317 +0000 UTC m=+801.763655122" lastFinishedPulling="2025-12-16 12:08:51.48391671 +0000 UTC m=+805.202174515" observedRunningTime="2025-12-16 12:08:52.321952964 +0000 UTC m=+806.040210769" watchObservedRunningTime="2025-12-16 12:08:52.328778909 +0000 UTC m=+806.047036734" Dec 16 12:08:53 crc kubenswrapper[4805]: I1216 12:08:53.316658 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk" event={"ID":"8fbc1c7a-286e-4428-a745-32211779781e","Type":"ContainerStarted","Data":"4384f3093dbbe559daabdbb8e8ff05327a5bc537ca8cbaf4145781866879a890"} Dec 16 12:08:53 crc kubenswrapper[4805]: I1216 12:08:53.330984 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lxsrn" podStartSLOduration=2.775152524 podStartE2EDuration="8.330963983s" podCreationTimestamp="2025-12-16 12:08:45 +0000 UTC" firstStartedPulling="2025-12-16 12:08:45.92387709 +0000 UTC m=+799.642134895" lastFinishedPulling="2025-12-16 12:08:51.479688529 +0000 UTC m=+805.197946354" observedRunningTime="2025-12-16 12:08:52.343613675 +0000 UTC m=+806.061871490" watchObservedRunningTime="2025-12-16 12:08:53.330963983 +0000 UTC m=+807.049221808" Dec 16 12:08:53 crc kubenswrapper[4805]: I1216 12:08:53.332231 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-8v4hk" podStartSLOduration=3.839361396 podStartE2EDuration="8.332221679s" podCreationTimestamp="2025-12-16 12:08:45 +0000 UTC" firstStartedPulling="2025-12-16 12:08:48.1124341 +0000 UTC m=+801.830691905" lastFinishedPulling="2025-12-16 12:08:52.605294383 +0000 UTC m=+806.323552188" observedRunningTime="2025-12-16 12:08:53.329786089 +0000 UTC m=+807.048043894" watchObservedRunningTime="2025-12-16 12:08:53.332221679 +0000 UTC m=+807.050479494" Dec 16 12:08:55 crc kubenswrapper[4805]: I1216 12:08:55.328282 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-z2kdh" event={"ID":"31cb3421-4893-45b4-bb8d-8afd77fe9cb2","Type":"ContainerStarted","Data":"66d292bc4db5c68867d0d97eadec139d24b8237e721d491b2ac351890e3f463d"} Dec 16 12:08:55 crc kubenswrapper[4805]: I1216 12:08:55.347851 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-z2kdh" podStartSLOduration=1.532727579 podStartE2EDuration="10.347834497s" podCreationTimestamp="2025-12-16 12:08:45 +0000 UTC" firstStartedPulling="2025-12-16 12:08:46.125075313 +0000 UTC m=+799.843333118" lastFinishedPulling="2025-12-16 12:08:54.940182231 +0000 UTC m=+808.658440036" observedRunningTime="2025-12-16 12:08:55.345986604 +0000 UTC m=+809.064244429" watchObservedRunningTime="2025-12-16 12:08:55.347834497 +0000 UTC m=+809.066092322" Dec 16 12:08:57 crc kubenswrapper[4805]: I1216 12:08:57.071511 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:08:57 crc kubenswrapper[4805]: I1216 12:08:57.072805 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:08:57 crc kubenswrapper[4805]: I1216 12:08:57.072944 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 12:08:57 crc kubenswrapper[4805]: I1216 12:08:57.073658 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92251edb6d76688dd19f1388e5267454cafee90ea7ff5bb9b248b2631cfd26c8"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 12:08:57 crc kubenswrapper[4805]: I1216 12:08:57.073806 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://92251edb6d76688dd19f1388e5267454cafee90ea7ff5bb9b248b2631cfd26c8" gracePeriod=600 Dec 16 12:08:57 crc kubenswrapper[4805]: I1216 12:08:57.478558 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" 
containerID="92251edb6d76688dd19f1388e5267454cafee90ea7ff5bb9b248b2631cfd26c8" exitCode=0 Dec 16 12:08:57 crc kubenswrapper[4805]: I1216 12:08:57.478614 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"92251edb6d76688dd19f1388e5267454cafee90ea7ff5bb9b248b2631cfd26c8"} Dec 16 12:08:57 crc kubenswrapper[4805]: I1216 12:08:57.478647 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"d3092c2527ab0dcd7a9a9a61613b2265defab594c8cd9fbda9395f115e5c0fc6"} Dec 16 12:08:57 crc kubenswrapper[4805]: I1216 12:08:57.478681 4805 scope.go:117] "RemoveContainer" containerID="53bfcd9e55d02e7133c707fa2852d5ae127e10c11b775a7d5cb91d28e2238a69" Dec 16 12:08:57 crc kubenswrapper[4805]: I1216 12:08:57.745668 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:57 crc kubenswrapper[4805]: I1216 12:08:57.745982 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:57 crc kubenswrapper[4805]: I1216 12:08:57.751874 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:58 crc kubenswrapper[4805]: I1216 12:08:58.493887 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5ffd7578b8-ljflf" Dec 16 12:08:58 crc kubenswrapper[4805]: I1216 12:08:58.546969 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dxhld"] Dec 16 12:09:00 crc kubenswrapper[4805]: I1216 12:09:00.925887 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-lxsrn" Dec 16 12:09:06 crc kubenswrapper[4805]: I1216 12:09:06.442188 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-m8h9v" Dec 16 12:09:19 crc kubenswrapper[4805]: I1216 12:09:19.735058 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx"] Dec 16 12:09:19 crc kubenswrapper[4805]: I1216 12:09:19.736910 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" Dec 16 12:09:19 crc kubenswrapper[4805]: I1216 12:09:19.741863 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 16 12:09:19 crc kubenswrapper[4805]: I1216 12:09:19.744852 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx"] Dec 16 12:09:19 crc kubenswrapper[4805]: I1216 12:09:19.820170 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd931a62-7c95-4999-8a4f-5c57209ea44f-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx\" (UID: \"fd931a62-7c95-4999-8a4f-5c57209ea44f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" Dec 16 12:09:19 crc kubenswrapper[4805]: I1216 12:09:19.820248 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4grsg\" (UniqueName: \"kubernetes.io/projected/fd931a62-7c95-4999-8a4f-5c57209ea44f-kube-api-access-4grsg\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx\" (UID: \"fd931a62-7c95-4999-8a4f-5c57209ea44f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" Dec 16 12:09:19 crc kubenswrapper[4805]: I1216 12:09:19.820271 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd931a62-7c95-4999-8a4f-5c57209ea44f-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx\" (UID: \"fd931a62-7c95-4999-8a4f-5c57209ea44f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" Dec 16 12:09:19 crc kubenswrapper[4805]: I1216 12:09:19.921497 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd931a62-7c95-4999-8a4f-5c57209ea44f-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx\" (UID: \"fd931a62-7c95-4999-8a4f-5c57209ea44f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" Dec 16 12:09:19 crc kubenswrapper[4805]: I1216 12:09:19.921555 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4grsg\" (UniqueName: \"kubernetes.io/projected/fd931a62-7c95-4999-8a4f-5c57209ea44f-kube-api-access-4grsg\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx\" (UID: \"fd931a62-7c95-4999-8a4f-5c57209ea44f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" Dec 16 12:09:19 crc kubenswrapper[4805]: I1216 12:09:19.921587 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd931a62-7c95-4999-8a4f-5c57209ea44f-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx\" (UID: \"fd931a62-7c95-4999-8a4f-5c57209ea44f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" Dec 16 12:09:19 crc kubenswrapper[4805]: I1216 12:09:19.922169 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/fd931a62-7c95-4999-8a4f-5c57209ea44f-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx\" (UID: \"fd931a62-7c95-4999-8a4f-5c57209ea44f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" Dec 16 12:09:19 crc kubenswrapper[4805]: I1216 12:09:19.922219 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd931a62-7c95-4999-8a4f-5c57209ea44f-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx\" (UID: \"fd931a62-7c95-4999-8a4f-5c57209ea44f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" Dec 16 12:09:19 crc kubenswrapper[4805]: I1216 12:09:19.940849 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4grsg\" (UniqueName: \"kubernetes.io/projected/fd931a62-7c95-4999-8a4f-5c57209ea44f-kube-api-access-4grsg\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx\" (UID: \"fd931a62-7c95-4999-8a4f-5c57209ea44f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" Dec 16 12:09:20 crc kubenswrapper[4805]: I1216 12:09:20.052465 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" Dec 16 12:09:20 crc kubenswrapper[4805]: I1216 12:09:20.470883 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx"] Dec 16 12:09:20 crc kubenswrapper[4805]: I1216 12:09:20.627238 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" event={"ID":"fd931a62-7c95-4999-8a4f-5c57209ea44f","Type":"ContainerStarted","Data":"4bf4361b765a0fe01c8a5b04c6162242edd541c382b508c02fd8885cd15d00fe"} Dec 16 12:09:21 crc kubenswrapper[4805]: I1216 12:09:21.632780 4805 generic.go:334] "Generic (PLEG): container finished" podID="fd931a62-7c95-4999-8a4f-5c57209ea44f" containerID="8cc93b50e58366833385aeb9564ebda59948b7426ccb4d630a7f112f220692bc" exitCode=0 Dec 16 12:09:21 crc kubenswrapper[4805]: I1216 12:09:21.632831 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" event={"ID":"fd931a62-7c95-4999-8a4f-5c57209ea44f","Type":"ContainerDied","Data":"8cc93b50e58366833385aeb9564ebda59948b7426ccb4d630a7f112f220692bc"} Dec 16 12:09:23 crc kubenswrapper[4805]: I1216 12:09:23.605856 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-dxhld" podUID="69858fb6-30f7-4b9b-a240-c95b4aa2de5a" containerName="console" containerID="cri-o://0e134b10a785328cf0e95480bf791cc79a74548d053590cf4e7d8ded24cc6a47" gracePeriod=15 Dec 16 12:09:23 crc kubenswrapper[4805]: I1216 12:09:23.647123 4805 generic.go:334] "Generic (PLEG): container finished" podID="fd931a62-7c95-4999-8a4f-5c57209ea44f" containerID="35b3c9ef951abfb4b73afe28a369e03e9c8b86a58e67befbaea992453e3412c5" exitCode=0 Dec 16 12:09:23 crc kubenswrapper[4805]: I1216 12:09:23.647176 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" 
event={"ID":"fd931a62-7c95-4999-8a4f-5c57209ea44f","Type":"ContainerDied","Data":"35b3c9ef951abfb4b73afe28a369e03e9c8b86a58e67befbaea992453e3412c5"} Dec 16 12:09:23 crc kubenswrapper[4805]: I1216 12:09:23.980448 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dxhld_69858fb6-30f7-4b9b-a240-c95b4aa2de5a/console/0.log" Dec 16 12:09:23 crc kubenswrapper[4805]: I1216 12:09:23.980793 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dxhld" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.076132 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76cw7\" (UniqueName: \"kubernetes.io/projected/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-kube-api-access-76cw7\") pod \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.076196 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-service-ca\") pod \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.076223 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-serving-cert\") pod \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.076247 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-oauth-config\") pod \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.076267 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-config\") pod \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.076292 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-trusted-ca-bundle\") pod \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.076322 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-oauth-serving-cert\") pod \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\" (UID: \"69858fb6-30f7-4b9b-a240-c95b4aa2de5a\") " Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.076944 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-service-ca" (OuterVolumeSpecName: "service-ca") pod "69858fb6-30f7-4b9b-a240-c95b4aa2de5a" (UID: "69858fb6-30f7-4b9b-a240-c95b4aa2de5a"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.077031 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "69858fb6-30f7-4b9b-a240-c95b4aa2de5a" (UID: "69858fb6-30f7-4b9b-a240-c95b4aa2de5a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.077092 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "69858fb6-30f7-4b9b-a240-c95b4aa2de5a" (UID: "69858fb6-30f7-4b9b-a240-c95b4aa2de5a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.077127 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-config" (OuterVolumeSpecName: "console-config") pod "69858fb6-30f7-4b9b-a240-c95b4aa2de5a" (UID: "69858fb6-30f7-4b9b-a240-c95b4aa2de5a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.082216 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-kube-api-access-76cw7" (OuterVolumeSpecName: "kube-api-access-76cw7") pod "69858fb6-30f7-4b9b-a240-c95b4aa2de5a" (UID: "69858fb6-30f7-4b9b-a240-c95b4aa2de5a"). InnerVolumeSpecName "kube-api-access-76cw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.082425 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "69858fb6-30f7-4b9b-a240-c95b4aa2de5a" (UID: "69858fb6-30f7-4b9b-a240-c95b4aa2de5a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.084195 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "69858fb6-30f7-4b9b-a240-c95b4aa2de5a" (UID: "69858fb6-30f7-4b9b-a240-c95b4aa2de5a"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.177953 4805 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.178035 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76cw7\" (UniqueName: \"kubernetes.io/projected/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-kube-api-access-76cw7\") on node \"crc\" DevicePath \"\"" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.178055 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-service-ca\") on node \"crc\" DevicePath \"\"" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.178066 4805 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.178076 4805 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.178084 4805 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-console-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.178093 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69858fb6-30f7-4b9b-a240-c95b4aa2de5a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.655664 4805 generic.go:334] "Generic (PLEG): container finished" podID="fd931a62-7c95-4999-8a4f-5c57209ea44f" containerID="d2ab14ad017a3c58ac4b86f9b3b9563d4cda01b08be51eb24e9ac08bf7ef7e14" exitCode=0 Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.655725 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" event={"ID":"fd931a62-7c95-4999-8a4f-5c57209ea44f","Type":"ContainerDied","Data":"d2ab14ad017a3c58ac4b86f9b3b9563d4cda01b08be51eb24e9ac08bf7ef7e14"} Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.658807 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dxhld_69858fb6-30f7-4b9b-a240-c95b4aa2de5a/console/0.log" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.658852 4805 generic.go:334] "Generic (PLEG): container finished" podID="69858fb6-30f7-4b9b-a240-c95b4aa2de5a" containerID="0e134b10a785328cf0e95480bf791cc79a74548d053590cf4e7d8ded24cc6a47" exitCode=2 Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.658878 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dxhld" event={"ID":"69858fb6-30f7-4b9b-a240-c95b4aa2de5a","Type":"ContainerDied","Data":"0e134b10a785328cf0e95480bf791cc79a74548d053590cf4e7d8ded24cc6a47"} Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.658900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dxhld" 
event={"ID":"69858fb6-30f7-4b9b-a240-c95b4aa2de5a","Type":"ContainerDied","Data":"9d611048d7abc6e42c55631ceeb86f6a0a5d02ed29298504e96d41d94d0bbaac"} Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.658916 4805 scope.go:117] "RemoveContainer" containerID="0e134b10a785328cf0e95480bf791cc79a74548d053590cf4e7d8ded24cc6a47" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.659029 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dxhld" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.673346 4805 scope.go:117] "RemoveContainer" containerID="0e134b10a785328cf0e95480bf791cc79a74548d053590cf4e7d8ded24cc6a47" Dec 16 12:09:24 crc kubenswrapper[4805]: E1216 12:09:24.675369 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e134b10a785328cf0e95480bf791cc79a74548d053590cf4e7d8ded24cc6a47\": container with ID starting with 0e134b10a785328cf0e95480bf791cc79a74548d053590cf4e7d8ded24cc6a47 not found: ID does not exist" containerID="0e134b10a785328cf0e95480bf791cc79a74548d053590cf4e7d8ded24cc6a47" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.675414 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e134b10a785328cf0e95480bf791cc79a74548d053590cf4e7d8ded24cc6a47"} err="failed to get container status \"0e134b10a785328cf0e95480bf791cc79a74548d053590cf4e7d8ded24cc6a47\": rpc error: code = NotFound desc = could not find container \"0e134b10a785328cf0e95480bf791cc79a74548d053590cf4e7d8ded24cc6a47\": container with ID starting with 0e134b10a785328cf0e95480bf791cc79a74548d053590cf4e7d8ded24cc6a47 not found: ID does not exist" Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.689329 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dxhld"] Dec 16 12:09:24 crc kubenswrapper[4805]: I1216 12:09:24.694694 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-dxhld"] Dec 16 12:09:25 crc kubenswrapper[4805]: I1216 12:09:25.888628 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" Dec 16 12:09:25 crc kubenswrapper[4805]: I1216 12:09:25.901707 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4grsg\" (UniqueName: \"kubernetes.io/projected/fd931a62-7c95-4999-8a4f-5c57209ea44f-kube-api-access-4grsg\") pod \"fd931a62-7c95-4999-8a4f-5c57209ea44f\" (UID: \"fd931a62-7c95-4999-8a4f-5c57209ea44f\") " Dec 16 12:09:25 crc kubenswrapper[4805]: I1216 12:09:25.901757 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd931a62-7c95-4999-8a4f-5c57209ea44f-bundle\") pod \"fd931a62-7c95-4999-8a4f-5c57209ea44f\" (UID: \"fd931a62-7c95-4999-8a4f-5c57209ea44f\") " Dec 16 12:09:25 crc kubenswrapper[4805]: I1216 12:09:25.901882 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd931a62-7c95-4999-8a4f-5c57209ea44f-util\") pod \"fd931a62-7c95-4999-8a4f-5c57209ea44f\" (UID: \"fd931a62-7c95-4999-8a4f-5c57209ea44f\") " Dec 16 12:09:25 crc kubenswrapper[4805]: I1216 12:09:25.904532 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd931a62-7c95-4999-8a4f-5c57209ea44f-bundle" (OuterVolumeSpecName: "bundle") pod "fd931a62-7c95-4999-8a4f-5c57209ea44f" (UID: "fd931a62-7c95-4999-8a4f-5c57209ea44f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:09:25 crc kubenswrapper[4805]: I1216 12:09:25.919483 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd931a62-7c95-4999-8a4f-5c57209ea44f-kube-api-access-4grsg" (OuterVolumeSpecName: "kube-api-access-4grsg") pod "fd931a62-7c95-4999-8a4f-5c57209ea44f" (UID: "fd931a62-7c95-4999-8a4f-5c57209ea44f"). InnerVolumeSpecName "kube-api-access-4grsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:09:26 crc kubenswrapper[4805]: I1216 12:09:26.003235 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4grsg\" (UniqueName: \"kubernetes.io/projected/fd931a62-7c95-4999-8a4f-5c57209ea44f-kube-api-access-4grsg\") on node \"crc\" DevicePath \"\"" Dec 16 12:09:26 crc kubenswrapper[4805]: I1216 12:09:26.003271 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd931a62-7c95-4999-8a4f-5c57209ea44f-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:09:26 crc kubenswrapper[4805]: I1216 12:09:26.203390 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd931a62-7c95-4999-8a4f-5c57209ea44f-util" (OuterVolumeSpecName: "util") pod "fd931a62-7c95-4999-8a4f-5c57209ea44f" (UID: "fd931a62-7c95-4999-8a4f-5c57209ea44f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:09:26 crc kubenswrapper[4805]: I1216 12:09:26.205533 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd931a62-7c95-4999-8a4f-5c57209ea44f-util\") on node \"crc\" DevicePath \"\"" Dec 16 12:09:26 crc kubenswrapper[4805]: I1216 12:09:26.532693 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69858fb6-30f7-4b9b-a240-c95b4aa2de5a" path="/var/lib/kubelet/pods/69858fb6-30f7-4b9b-a240-c95b4aa2de5a/volumes" Dec 16 12:09:26 crc kubenswrapper[4805]: I1216 12:09:26.676747 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" event={"ID":"fd931a62-7c95-4999-8a4f-5c57209ea44f","Type":"ContainerDied","Data":"4bf4361b765a0fe01c8a5b04c6162242edd541c382b508c02fd8885cd15d00fe"} Dec 16 12:09:26 crc kubenswrapper[4805]: I1216 12:09:26.676801 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bf4361b765a0fe01c8a5b04c6162242edd541c382b508c02fd8885cd15d00fe" Dec 16 12:09:26 crc kubenswrapper[4805]: I1216 12:09:26.676830 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.014228 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f"] Dec 16 12:09:35 crc kubenswrapper[4805]: E1216 12:09:35.014935 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd931a62-7c95-4999-8a4f-5c57209ea44f" containerName="extract" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.014948 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd931a62-7c95-4999-8a4f-5c57209ea44f" containerName="extract" Dec 16 12:09:35 crc kubenswrapper[4805]: E1216 12:09:35.014966 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd931a62-7c95-4999-8a4f-5c57209ea44f" containerName="pull" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.014972 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd931a62-7c95-4999-8a4f-5c57209ea44f" containerName="pull" Dec 16 12:09:35 crc kubenswrapper[4805]: E1216 12:09:35.014980 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd931a62-7c95-4999-8a4f-5c57209ea44f" containerName="util" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.014986 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd931a62-7c95-4999-8a4f-5c57209ea44f" containerName="util" Dec 16 12:09:35 crc kubenswrapper[4805]: E1216 12:09:35.014996 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69858fb6-30f7-4b9b-a240-c95b4aa2de5a" containerName="console" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.015002 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="69858fb6-30f7-4b9b-a240-c95b4aa2de5a" containerName="console" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.015092 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd931a62-7c95-4999-8a4f-5c57209ea44f" containerName="extract" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.015104 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="69858fb6-30f7-4b9b-a240-c95b4aa2de5a" containerName="console" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.015512 4805 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.019882 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-sqxgk" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.020328 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.020842 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.021035 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.022545 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.038852 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f"] Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.111604 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2rzk\" (UniqueName: \"kubernetes.io/projected/e7312814-4835-454a-b7c4-5036ce21ef36-kube-api-access-w2rzk\") pod \"metallb-operator-controller-manager-596ddb8dd6-lh97f\" (UID: \"e7312814-4835-454a-b7c4-5036ce21ef36\") " pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.111927 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7312814-4835-454a-b7c4-5036ce21ef36-webhook-cert\") pod \"metallb-operator-controller-manager-596ddb8dd6-lh97f\" (UID: \"e7312814-4835-454a-b7c4-5036ce21ef36\") " pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.112055 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7312814-4835-454a-b7c4-5036ce21ef36-apiservice-cert\") pod \"metallb-operator-controller-manager-596ddb8dd6-lh97f\" (UID: \"e7312814-4835-454a-b7c4-5036ce21ef36\") " pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.213054 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7312814-4835-454a-b7c4-5036ce21ef36-webhook-cert\") pod \"metallb-operator-controller-manager-596ddb8dd6-lh97f\" (UID: \"e7312814-4835-454a-b7c4-5036ce21ef36\") " pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.213341 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7312814-4835-454a-b7c4-5036ce21ef36-apiservice-cert\") pod \"metallb-operator-controller-manager-596ddb8dd6-lh97f\" (UID: \"e7312814-4835-454a-b7c4-5036ce21ef36\") " pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.213500 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2rzk\" (UniqueName: \"kubernetes.io/projected/e7312814-4835-454a-b7c4-5036ce21ef36-kube-api-access-w2rzk\") pod \"metallb-operator-controller-manager-596ddb8dd6-lh97f\" (UID: \"e7312814-4835-454a-b7c4-5036ce21ef36\") " pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.219968 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7312814-4835-454a-b7c4-5036ce21ef36-apiservice-cert\") pod \"metallb-operator-controller-manager-596ddb8dd6-lh97f\" (UID: \"e7312814-4835-454a-b7c4-5036ce21ef36\") " pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.220059 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7312814-4835-454a-b7c4-5036ce21ef36-webhook-cert\") pod \"metallb-operator-controller-manager-596ddb8dd6-lh97f\" (UID: \"e7312814-4835-454a-b7c4-5036ce21ef36\") " pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.268057 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2rzk\" (UniqueName: \"kubernetes.io/projected/e7312814-4835-454a-b7c4-5036ce21ef36-kube-api-access-w2rzk\") pod \"metallb-operator-controller-manager-596ddb8dd6-lh97f\" (UID: \"e7312814-4835-454a-b7c4-5036ce21ef36\") " pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.337050 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.414212 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj"] Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.415281 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.417837 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-j9bwd" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.420018 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.423058 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.459013 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj"] Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.516241 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mhbv\" (UniqueName: \"kubernetes.io/projected/1f6983ab-d0c5-4431-878f-86c4d91d6720-kube-api-access-9mhbv\") pod \"metallb-operator-webhook-server-74df5c45cc-dv7cj\" (UID: \"1f6983ab-d0c5-4431-878f-86c4d91d6720\") " pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.516291 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1f6983ab-d0c5-4431-878f-86c4d91d6720-apiservice-cert\") pod \"metallb-operator-webhook-server-74df5c45cc-dv7cj\" (UID: \"1f6983ab-d0c5-4431-878f-86c4d91d6720\") " pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.516436 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f6983ab-d0c5-4431-878f-86c4d91d6720-webhook-cert\") pod \"metallb-operator-webhook-server-74df5c45cc-dv7cj\" (UID: \"1f6983ab-d0c5-4431-878f-86c4d91d6720\") " pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.617047 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f6983ab-d0c5-4431-878f-86c4d91d6720-webhook-cert\") pod \"metallb-operator-webhook-server-74df5c45cc-dv7cj\" (UID: \"1f6983ab-d0c5-4431-878f-86c4d91d6720\") " pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.617158 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mhbv\" (UniqueName: \"kubernetes.io/projected/1f6983ab-d0c5-4431-878f-86c4d91d6720-kube-api-access-9mhbv\") pod \"metallb-operator-webhook-server-74df5c45cc-dv7cj\" (UID: \"1f6983ab-d0c5-4431-878f-86c4d91d6720\") " pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.617201 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1f6983ab-d0c5-4431-878f-86c4d91d6720-apiservice-cert\") pod \"metallb-operator-webhook-server-74df5c45cc-dv7cj\" (UID: \"1f6983ab-d0c5-4431-878f-86c4d91d6720\") " pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 
12:09:35.624226 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f6983ab-d0c5-4431-878f-86c4d91d6720-webhook-cert\") pod \"metallb-operator-webhook-server-74df5c45cc-dv7cj\" (UID: \"1f6983ab-d0c5-4431-878f-86c4d91d6720\") " pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.625735 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1f6983ab-d0c5-4431-878f-86c4d91d6720-apiservice-cert\") pod \"metallb-operator-webhook-server-74df5c45cc-dv7cj\" (UID: \"1f6983ab-d0c5-4431-878f-86c4d91d6720\") " pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.701483 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mhbv\" (UniqueName: \"kubernetes.io/projected/1f6983ab-d0c5-4431-878f-86c4d91d6720-kube-api-access-9mhbv\") pod \"metallb-operator-webhook-server-74df5c45cc-dv7cj\" (UID: \"1f6983ab-d0c5-4431-878f-86c4d91d6720\") " pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.731084 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" Dec 16 12:09:35 crc kubenswrapper[4805]: I1216 12:09:35.833904 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f"] Dec 16 12:09:35 crc kubenswrapper[4805]: W1216 12:09:35.843590 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7312814_4835_454a_b7c4_5036ce21ef36.slice/crio-5bc064d402134cbef923e79e416b8c5aaf42f7ff777f1e2ab9b2f620b3603663 WatchSource:0}: Error finding container 5bc064d402134cbef923e79e416b8c5aaf42f7ff777f1e2ab9b2f620b3603663: Status 404 returned error can't find the container with id 5bc064d402134cbef923e79e416b8c5aaf42f7ff777f1e2ab9b2f620b3603663 Dec 16 12:09:36 crc kubenswrapper[4805]: I1216 12:09:36.118800 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj"] Dec 16 12:09:36 crc kubenswrapper[4805]: I1216 12:09:36.797212 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" event={"ID":"1f6983ab-d0c5-4431-878f-86c4d91d6720","Type":"ContainerStarted","Data":"e64160d995ca105433128fc45fe7337272c40da1a300b776373dff21d75ee906"} Dec 16 12:09:36 crc kubenswrapper[4805]: I1216 12:09:36.799245 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" event={"ID":"e7312814-4835-454a-b7c4-5036ce21ef36","Type":"ContainerStarted","Data":"5bc064d402134cbef923e79e416b8c5aaf42f7ff777f1e2ab9b2f620b3603663"} Dec 16 12:09:39 crc kubenswrapper[4805]: I1216 12:09:39.822491 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" event={"ID":"e7312814-4835-454a-b7c4-5036ce21ef36","Type":"ContainerStarted","Data":"479565e6dc02e2359ae56b4102a856c88240d9babfffab1ea2202fc9611b6037"} Dec 16 12:09:39 crc kubenswrapper[4805]: I1216 12:09:39.823280 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" Dec 16 12:09:39 crc kubenswrapper[4805]: I1216 12:09:39.858549 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" podStartSLOduration=2.423714419 podStartE2EDuration="5.858523488s" podCreationTimestamp="2025-12-16 12:09:34 +0000 UTC" firstStartedPulling="2025-12-16 12:09:35.85290169 +0000 UTC m=+849.571159495" lastFinishedPulling="2025-12-16 12:09:39.287710759 +0000 UTC m=+853.005968564" observedRunningTime="2025-12-16 12:09:39.853253567 +0000 UTC m=+853.571511382" watchObservedRunningTime="2025-12-16 12:09:39.858523488 +0000 UTC m=+853.576781303" Dec 16 12:09:41 crc kubenswrapper[4805]: I1216 12:09:41.842854 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" event={"ID":"1f6983ab-d0c5-4431-878f-86c4d91d6720","Type":"ContainerStarted","Data":"893f107359000b20dbda43e10b161495215e6ab55fb6d6a18b202b0a0ae9c0e5"} Dec 16 12:09:41 crc kubenswrapper[4805]: I1216 12:09:41.843307 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" Dec 16 12:09:41 crc kubenswrapper[4805]: I1216 12:09:41.867700 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" podStartSLOduration=1.7017011819999999 podStartE2EDuration="6.867677563s" podCreationTimestamp="2025-12-16 12:09:35 +0000 UTC" firstStartedPulling="2025-12-16 12:09:36.127222488 +0000 UTC m=+849.845480303" lastFinishedPulling="2025-12-16 12:09:41.293198879 +0000 UTC m=+855.011456684" observedRunningTime="2025-12-16 12:09:41.860055645 +0000 UTC m=+855.578313460" watchObservedRunningTime="2025-12-16 12:09:41.867677563 +0000 UTC m=+855.585935378" Dec 16 12:09:55 crc kubenswrapper[4805]: I1216 12:09:55.734544 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-74df5c45cc-dv7cj" Dec 16 12:10:15 crc kubenswrapper[4805]: I1216 12:10:15.339565 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-596ddb8dd6-lh97f" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.506326 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-cq86h"] Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.509359 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.511171 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj"] Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.511980 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.514992 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.515404 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.515407 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8v5mq" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.515470 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.532575 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj"] Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.552131 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c0c0377d-ee53-45d7-87be-8f5ba37280b3-reloader\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.552192 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c0c0377d-ee53-45d7-87be-8f5ba37280b3-frr-startup\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.552270 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgv7p\" (UniqueName: \"kubernetes.io/projected/a7792c7c-66d6-4c58-ba5f-09ddbb883c20-kube-api-access-zgv7p\") pod \"frr-k8s-webhook-server-7784b6fcf-zqndj\" (UID: \"a7792c7c-66d6-4c58-ba5f-09ddbb883c20\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.552318 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c0c0377d-ee53-45d7-87be-8f5ba37280b3-metrics\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.552341 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c0c0377d-ee53-45d7-87be-8f5ba37280b3-frr-sockets\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.552368 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w5t9\" (UniqueName: \"kubernetes.io/projected/c0c0377d-ee53-45d7-87be-8f5ba37280b3-kube-api-access-2w5t9\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.552502 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7792c7c-66d6-4c58-ba5f-09ddbb883c20-cert\") pod 
\"frr-k8s-webhook-server-7784b6fcf-zqndj\" (UID: \"a7792c7c-66d6-4c58-ba5f-09ddbb883c20\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.553074 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c0c0377d-ee53-45d7-87be-8f5ba37280b3-frr-conf\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.553112 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0c0377d-ee53-45d7-87be-8f5ba37280b3-metrics-certs\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.633836 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-zjknk"] Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.634884 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-zjknk" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.638410 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.639053 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.639577 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.639838 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-j5hxn" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.654113 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c0c0377d-ee53-45d7-87be-8f5ba37280b3-reloader\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.654259 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c0c0377d-ee53-45d7-87be-8f5ba37280b3-frr-startup\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.654304 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgv7p\" (UniqueName: \"kubernetes.io/projected/a7792c7c-66d6-4c58-ba5f-09ddbb883c20-kube-api-access-zgv7p\") pod \"frr-k8s-webhook-server-7784b6fcf-zqndj\" (UID: \"a7792c7c-66d6-4c58-ba5f-09ddbb883c20\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.654344 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c0c0377d-ee53-45d7-87be-8f5ba37280b3-metrics\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.654367 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c0c0377d-ee53-45d7-87be-8f5ba37280b3-frr-sockets\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.654397 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/89665fca-66d5-4ff5-98d9-e49065febb40-metallb-excludel2\") pod \"speaker-zjknk\" (UID: \"89665fca-66d5-4ff5-98d9-e49065febb40\") " pod="metallb-system/speaker-zjknk" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.654428 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w5t9\" (UniqueName: \"kubernetes.io/projected/c0c0377d-ee53-45d7-87be-8f5ba37280b3-kube-api-access-2w5t9\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.654464 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7792c7c-66d6-4c58-ba5f-09ddbb883c20-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-zqndj\" (UID: \"a7792c7c-66d6-4c58-ba5f-09ddbb883c20\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.654499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/89665fca-66d5-4ff5-98d9-e49065febb40-memberlist\") pod \"speaker-zjknk\" (UID: \"89665fca-66d5-4ff5-98d9-e49065febb40\") " pod="metallb-system/speaker-zjknk" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.654532 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvfqn\" (UniqueName: \"kubernetes.io/projected/89665fca-66d5-4ff5-98d9-e49065febb40-kube-api-access-kvfqn\") pod \"speaker-zjknk\" (UID: \"89665fca-66d5-4ff5-98d9-e49065febb40\") " pod="metallb-system/speaker-zjknk" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.654580 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c0c0377d-ee53-45d7-87be-8f5ba37280b3-frr-conf\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.654601 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0c0377d-ee53-45d7-87be-8f5ba37280b3-metrics-certs\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.654634 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89665fca-66d5-4ff5-98d9-e49065febb40-metrics-certs\") pod \"speaker-zjknk\" (UID: \"89665fca-66d5-4ff5-98d9-e49065febb40\") " pod="metallb-system/speaker-zjknk" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.654979 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c0c0377d-ee53-45d7-87be-8f5ba37280b3-frr-sockets\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " 
pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: E1216 12:10:16.655044 4805 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 16 12:10:16 crc kubenswrapper[4805]: E1216 12:10:16.655091 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7792c7c-66d6-4c58-ba5f-09ddbb883c20-cert podName:a7792c7c-66d6-4c58-ba5f-09ddbb883c20 nodeName:}" failed. No retries permitted until 2025-12-16 12:10:17.155074563 +0000 UTC m=+890.873332358 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a7792c7c-66d6-4c58-ba5f-09ddbb883c20-cert") pod "frr-k8s-webhook-server-7784b6fcf-zqndj" (UID: "a7792c7c-66d6-4c58-ba5f-09ddbb883c20") : secret "frr-k8s-webhook-server-cert" not found Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.655235 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c0c0377d-ee53-45d7-87be-8f5ba37280b3-reloader\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.655585 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c0c0377d-ee53-45d7-87be-8f5ba37280b3-frr-conf\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.656127 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c0c0377d-ee53-45d7-87be-8f5ba37280b3-frr-startup\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.656424 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c0c0377d-ee53-45d7-87be-8f5ba37280b3-metrics\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.662943 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0c0377d-ee53-45d7-87be-8f5ba37280b3-metrics-certs\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.682605 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-mw2xw"] Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.683800 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-mw2xw" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.687800 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.701418 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgv7p\" (UniqueName: \"kubernetes.io/projected/a7792c7c-66d6-4c58-ba5f-09ddbb883c20-kube-api-access-zgv7p\") pod \"frr-k8s-webhook-server-7784b6fcf-zqndj\" (UID: \"a7792c7c-66d6-4c58-ba5f-09ddbb883c20\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.708740 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w5t9\" (UniqueName: \"kubernetes.io/projected/c0c0377d-ee53-45d7-87be-8f5ba37280b3-kube-api-access-2w5t9\") pod \"frr-k8s-cq86h\" (UID: \"c0c0377d-ee53-45d7-87be-8f5ba37280b3\") " pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.710753 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-mw2xw"] Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.776104 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/601c7e3b-b663-4780-bb6a-59bc7e4d510d-metrics-certs\") pod \"controller-5bddd4b946-mw2xw\" (UID: \"601c7e3b-b663-4780-bb6a-59bc7e4d510d\") " pod="metallb-system/controller-5bddd4b946-mw2xw" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.776216 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/89665fca-66d5-4ff5-98d9-e49065febb40-metallb-excludel2\") pod \"speaker-zjknk\" (UID: \"89665fca-66d5-4ff5-98d9-e49065febb40\") " pod="metallb-system/speaker-zjknk" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.776291 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/89665fca-66d5-4ff5-98d9-e49065febb40-memberlist\") pod \"speaker-zjknk\" (UID: \"89665fca-66d5-4ff5-98d9-e49065febb40\") " pod="metallb-system/speaker-zjknk" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.776318 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/601c7e3b-b663-4780-bb6a-59bc7e4d510d-cert\") pod \"controller-5bddd4b946-mw2xw\" (UID: \"601c7e3b-b663-4780-bb6a-59bc7e4d510d\") " pod="metallb-system/controller-5bddd4b946-mw2xw" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.776359 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vn6x\" (UniqueName: \"kubernetes.io/projected/601c7e3b-b663-4780-bb6a-59bc7e4d510d-kube-api-access-8vn6x\") pod \"controller-5bddd4b946-mw2xw\" (UID: \"601c7e3b-b663-4780-bb6a-59bc7e4d510d\") " pod="metallb-system/controller-5bddd4b946-mw2xw" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.776390 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvfqn\" (UniqueName: \"kubernetes.io/projected/89665fca-66d5-4ff5-98d9-e49065febb40-kube-api-access-kvfqn\") pod \"speaker-zjknk\" (UID: \"89665fca-66d5-4ff5-98d9-e49065febb40\") " pod="metallb-system/speaker-zjknk" Dec 16 12:10:16 crc 
kubenswrapper[4805]: I1216 12:10:16.776451 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89665fca-66d5-4ff5-98d9-e49065febb40-metrics-certs\") pod \"speaker-zjknk\" (UID: \"89665fca-66d5-4ff5-98d9-e49065febb40\") " pod="metallb-system/speaker-zjknk" Dec 16 12:10:16 crc kubenswrapper[4805]: E1216 12:10:16.776599 4805 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 16 12:10:16 crc kubenswrapper[4805]: E1216 12:10:16.776658 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89665fca-66d5-4ff5-98d9-e49065febb40-metrics-certs podName:89665fca-66d5-4ff5-98d9-e49065febb40 nodeName:}" failed. No retries permitted until 2025-12-16 12:10:17.27664076 +0000 UTC m=+890.994898565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/89665fca-66d5-4ff5-98d9-e49065febb40-metrics-certs") pod "speaker-zjknk" (UID: "89665fca-66d5-4ff5-98d9-e49065febb40") : secret "speaker-certs-secret" not found Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.777744 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/89665fca-66d5-4ff5-98d9-e49065febb40-metallb-excludel2\") pod \"speaker-zjknk\" (UID: \"89665fca-66d5-4ff5-98d9-e49065febb40\") " pod="metallb-system/speaker-zjknk" Dec 16 12:10:16 crc kubenswrapper[4805]: E1216 12:10:16.777850 4805 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 16 12:10:16 crc kubenswrapper[4805]: E1216 12:10:16.777897 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89665fca-66d5-4ff5-98d9-e49065febb40-memberlist podName:89665fca-66d5-4ff5-98d9-e49065febb40 nodeName:}" failed. No retries permitted until 2025-12-16 12:10:17.277877675 +0000 UTC m=+890.996135480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/89665fca-66d5-4ff5-98d9-e49065febb40-memberlist") pod "speaker-zjknk" (UID: "89665fca-66d5-4ff5-98d9-e49065febb40") : secret "metallb-memberlist" not found Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.799653 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvfqn\" (UniqueName: \"kubernetes.io/projected/89665fca-66d5-4ff5-98d9-e49065febb40-kube-api-access-kvfqn\") pod \"speaker-zjknk\" (UID: \"89665fca-66d5-4ff5-98d9-e49065febb40\") " pod="metallb-system/speaker-zjknk" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.833045 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.877651 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/601c7e3b-b663-4780-bb6a-59bc7e4d510d-cert\") pod \"controller-5bddd4b946-mw2xw\" (UID: \"601c7e3b-b663-4780-bb6a-59bc7e4d510d\") " pod="metallb-system/controller-5bddd4b946-mw2xw" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.877710 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vn6x\" (UniqueName: \"kubernetes.io/projected/601c7e3b-b663-4780-bb6a-59bc7e4d510d-kube-api-access-8vn6x\") pod \"controller-5bddd4b946-mw2xw\" (UID: \"601c7e3b-b663-4780-bb6a-59bc7e4d510d\") " pod="metallb-system/controller-5bddd4b946-mw2xw" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.877783 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/601c7e3b-b663-4780-bb6a-59bc7e4d510d-metrics-certs\") pod \"controller-5bddd4b946-mw2xw\" (UID: \"601c7e3b-b663-4780-bb6a-59bc7e4d510d\") " pod="metallb-system/controller-5bddd4b946-mw2xw" Dec 16 12:10:16 crc kubenswrapper[4805]: E1216 12:10:16.877911 4805 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Dec 16 12:10:16 crc kubenswrapper[4805]: E1216 12:10:16.877963 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/601c7e3b-b663-4780-bb6a-59bc7e4d510d-metrics-certs podName:601c7e3b-b663-4780-bb6a-59bc7e4d510d nodeName:}" failed. No retries permitted until 2025-12-16 12:10:17.377946338 +0000 UTC m=+891.096204143 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/601c7e3b-b663-4780-bb6a-59bc7e4d510d-metrics-certs") pod "controller-5bddd4b946-mw2xw" (UID: "601c7e3b-b663-4780-bb6a-59bc7e4d510d") : secret "controller-certs-secret" not found Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.896514 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/601c7e3b-b663-4780-bb6a-59bc7e4d510d-cert\") pod \"controller-5bddd4b946-mw2xw\" (UID: \"601c7e3b-b663-4780-bb6a-59bc7e4d510d\") " pod="metallb-system/controller-5bddd4b946-mw2xw" Dec 16 12:10:16 crc kubenswrapper[4805]: I1216 12:10:16.910533 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vn6x\" (UniqueName: \"kubernetes.io/projected/601c7e3b-b663-4780-bb6a-59bc7e4d510d-kube-api-access-8vn6x\") pod \"controller-5bddd4b946-mw2xw\" (UID: \"601c7e3b-b663-4780-bb6a-59bc7e4d510d\") " pod="metallb-system/controller-5bddd4b946-mw2xw" Dec 16 12:10:17 crc kubenswrapper[4805]: I1216 12:10:17.062059 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq86h" event={"ID":"c0c0377d-ee53-45d7-87be-8f5ba37280b3","Type":"ContainerStarted","Data":"aefb10c8cb923a1ce5b7e64fc7bb97fcb181169f22c6c2c59db4c5125d672517"} Dec 16 12:10:17 crc kubenswrapper[4805]: I1216 12:10:17.183780 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7792c7c-66d6-4c58-ba5f-09ddbb883c20-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-zqndj\" (UID: \"a7792c7c-66d6-4c58-ba5f-09ddbb883c20\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj" Dec 16 12:10:17 crc kubenswrapper[4805]: I1216 12:10:17.188833 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7792c7c-66d6-4c58-ba5f-09ddbb883c20-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-zqndj\" (UID: \"a7792c7c-66d6-4c58-ba5f-09ddbb883c20\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj" Dec 16 12:10:17 crc kubenswrapper[4805]: I1216 12:10:17.284835 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89665fca-66d5-4ff5-98d9-e49065febb40-metrics-certs\") pod \"speaker-zjknk\" (UID: \"89665fca-66d5-4ff5-98d9-e49065febb40\") " pod="metallb-system/speaker-zjknk" Dec 16 12:10:17 crc kubenswrapper[4805]: I1216 12:10:17.284959 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/89665fca-66d5-4ff5-98d9-e49065febb40-memberlist\") pod \"speaker-zjknk\" (UID: \"89665fca-66d5-4ff5-98d9-e49065febb40\") " pod="metallb-system/speaker-zjknk" Dec 16 12:10:17 crc kubenswrapper[4805]: E1216 12:10:17.285104 4805 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 16 12:10:17 crc kubenswrapper[4805]: E1216 12:10:17.285213 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89665fca-66d5-4ff5-98d9-e49065febb40-memberlist podName:89665fca-66d5-4ff5-98d9-e49065febb40 nodeName:}" failed. No retries permitted until 2025-12-16 12:10:18.285192588 +0000 UTC m=+892.003450453 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/89665fca-66d5-4ff5-98d9-e49065febb40-memberlist") pod "speaker-zjknk" (UID: "89665fca-66d5-4ff5-98d9-e49065febb40") : secret "metallb-memberlist" not found Dec 16 12:10:17 crc kubenswrapper[4805]: I1216 12:10:17.290769 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89665fca-66d5-4ff5-98d9-e49065febb40-metrics-certs\") pod \"speaker-zjknk\" (UID: \"89665fca-66d5-4ff5-98d9-e49065febb40\") " pod="metallb-system/speaker-zjknk" Dec 16 12:10:17 crc kubenswrapper[4805]: I1216 12:10:17.386034 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/601c7e3b-b663-4780-bb6a-59bc7e4d510d-metrics-certs\") pod \"controller-5bddd4b946-mw2xw\" (UID: \"601c7e3b-b663-4780-bb6a-59bc7e4d510d\") " pod="metallb-system/controller-5bddd4b946-mw2xw" Dec 16 12:10:17 crc kubenswrapper[4805]: I1216 12:10:17.388888 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/601c7e3b-b663-4780-bb6a-59bc7e4d510d-metrics-certs\") pod \"controller-5bddd4b946-mw2xw\" (UID: \"601c7e3b-b663-4780-bb6a-59bc7e4d510d\") " pod="metallb-system/controller-5bddd4b946-mw2xw" Dec 16 12:10:17 crc kubenswrapper[4805]: I1216 12:10:17.442934 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj" Dec 16 12:10:17 crc kubenswrapper[4805]: I1216 12:10:17.489736 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-mw2xw" Dec 16 12:10:17 crc kubenswrapper[4805]: I1216 12:10:17.792103 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj"] Dec 16 12:10:18 crc kubenswrapper[4805]: I1216 12:10:18.034885 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-mw2xw"] Dec 16 12:10:18 crc kubenswrapper[4805]: I1216 12:10:18.071424 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj" event={"ID":"a7792c7c-66d6-4c58-ba5f-09ddbb883c20","Type":"ContainerStarted","Data":"2e4c6e19ca00629ff00df705ab7822ea0388b5984abf4e265d5579fb8d34de6e"} Dec 16 12:10:18 crc kubenswrapper[4805]: I1216 12:10:18.074884 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-mw2xw" event={"ID":"601c7e3b-b663-4780-bb6a-59bc7e4d510d","Type":"ContainerStarted","Data":"2dabd4ce7506a176db7c207e57fa869cc3df88a2e8177ad2e7011fece8fa21d6"} Dec 16 12:10:18 crc kubenswrapper[4805]: I1216 12:10:18.358024 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/89665fca-66d5-4ff5-98d9-e49065febb40-memberlist\") pod \"speaker-zjknk\" (UID: \"89665fca-66d5-4ff5-98d9-e49065febb40\") " pod="metallb-system/speaker-zjknk" Dec 16 12:10:18 crc kubenswrapper[4805]: I1216 12:10:18.370023 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/89665fca-66d5-4ff5-98d9-e49065febb40-memberlist\") pod \"speaker-zjknk\" (UID: \"89665fca-66d5-4ff5-98d9-e49065febb40\") " pod="metallb-system/speaker-zjknk" Dec 16 12:10:18 crc kubenswrapper[4805]: I1216 12:10:18.450316 4805 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="metallb-system/speaker-zjknk" Dec 16 12:10:19 crc kubenswrapper[4805]: I1216 12:10:19.091771 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zjknk" event={"ID":"89665fca-66d5-4ff5-98d9-e49065febb40","Type":"ContainerStarted","Data":"d32dd8b0d36363f01417389b195c1c3ac9e51af5b12811298885357579b07f0a"} Dec 16 12:10:19 crc kubenswrapper[4805]: I1216 12:10:19.092104 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zjknk" event={"ID":"89665fca-66d5-4ff5-98d9-e49065febb40","Type":"ContainerStarted","Data":"02e89363410946ddad93b6d3bd9dcf00e223677ffd7a7acdbdf584ae4e2f041b"} Dec 16 12:10:19 crc kubenswrapper[4805]: I1216 12:10:19.095016 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-mw2xw" event={"ID":"601c7e3b-b663-4780-bb6a-59bc7e4d510d","Type":"ContainerStarted","Data":"24d15dd5dd265c6247f1f9c451b5d25542291b13ad38cebe0d291ed960ab27f6"} Dec 16 12:10:19 crc kubenswrapper[4805]: I1216 12:10:19.095056 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-mw2xw" event={"ID":"601c7e3b-b663-4780-bb6a-59bc7e4d510d","Type":"ContainerStarted","Data":"d8866f4fa4e865207a83e29f7f2e1dc573a805fe01dc6fd322b55a18e40776f4"} Dec 16 12:10:19 crc kubenswrapper[4805]: I1216 12:10:19.095993 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-mw2xw" Dec 16 12:10:19 crc kubenswrapper[4805]: I1216 12:10:19.116675 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-mw2xw" podStartSLOduration=3.116655911 podStartE2EDuration="3.116655911s" podCreationTimestamp="2025-12-16 12:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:10:19.112606565 +0000 UTC m=+892.830864370" watchObservedRunningTime="2025-12-16 12:10:19.116655911 +0000 UTC m=+892.834913736" Dec 16 12:10:20 crc kubenswrapper[4805]: I1216 12:10:20.103004 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zjknk" event={"ID":"89665fca-66d5-4ff5-98d9-e49065febb40","Type":"ContainerStarted","Data":"08fef17f3bccaefaa0ea97c29280bb7f0cf02e2827a69b544890875e33cad110"} Dec 16 12:10:20 crc kubenswrapper[4805]: I1216 12:10:20.129488 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-zjknk" podStartSLOduration=4.1294687230000005 podStartE2EDuration="4.129468723s" podCreationTimestamp="2025-12-16 12:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:10:20.124031318 +0000 UTC m=+893.842289133" watchObservedRunningTime="2025-12-16 12:10:20.129468723 +0000 UTC m=+893.847726538" Dec 16 12:10:21 crc kubenswrapper[4805]: I1216 12:10:21.108683 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-zjknk" Dec 16 12:10:28 crc kubenswrapper[4805]: I1216 12:10:28.453922 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-zjknk" Dec 16 12:10:29 crc kubenswrapper[4805]: I1216 12:10:29.180081 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj" 
event={"ID":"a7792c7c-66d6-4c58-ba5f-09ddbb883c20","Type":"ContainerStarted","Data":"89987451f65842ea03df4187b5886ef3733503be5c4f8554bd2d298eb0a8c2d5"} Dec 16 12:10:29 crc kubenswrapper[4805]: I1216 12:10:29.180419 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj" Dec 16 12:10:29 crc kubenswrapper[4805]: I1216 12:10:29.182393 4805 generic.go:334] "Generic (PLEG): container finished" podID="c0c0377d-ee53-45d7-87be-8f5ba37280b3" containerID="fe4c73267bd106bdd66f9b41337b276f95a162996323383fb32aade11f90dacb" exitCode=0 Dec 16 12:10:29 crc kubenswrapper[4805]: I1216 12:10:29.182443 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq86h" event={"ID":"c0c0377d-ee53-45d7-87be-8f5ba37280b3","Type":"ContainerDied","Data":"fe4c73267bd106bdd66f9b41337b276f95a162996323383fb32aade11f90dacb"} Dec 16 12:10:29 crc kubenswrapper[4805]: I1216 12:10:29.199112 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj" podStartSLOduration=2.210818578 podStartE2EDuration="13.199093316s" podCreationTimestamp="2025-12-16 12:10:16 +0000 UTC" firstStartedPulling="2025-12-16 12:10:17.800099908 +0000 UTC m=+891.518357713" lastFinishedPulling="2025-12-16 12:10:28.788374646 +0000 UTC m=+902.506632451" observedRunningTime="2025-12-16 12:10:29.194780802 +0000 UTC m=+902.913038607" watchObservedRunningTime="2025-12-16 12:10:29.199093316 +0000 UTC m=+902.917351131" Dec 16 12:10:29 crc kubenswrapper[4805]: E1216 12:10:29.395989 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0c0377d_ee53_45d7_87be_8f5ba37280b3.slice/crio-conmon-6a070f96955b44ca8cba4475cd9790db7b74a8b13b647dc9c7cc99d4a1b873b0.scope\": RecentStats: unable to find data in memory cache]" Dec 16 12:10:30 crc kubenswrapper[4805]: I1216 12:10:30.187984 4805 generic.go:334] "Generic (PLEG): container finished" podID="c0c0377d-ee53-45d7-87be-8f5ba37280b3" containerID="6a070f96955b44ca8cba4475cd9790db7b74a8b13b647dc9c7cc99d4a1b873b0" exitCode=0 Dec 16 12:10:30 crc kubenswrapper[4805]: I1216 12:10:30.188068 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq86h" event={"ID":"c0c0377d-ee53-45d7-87be-8f5ba37280b3","Type":"ContainerDied","Data":"6a070f96955b44ca8cba4475cd9790db7b74a8b13b647dc9c7cc99d4a1b873b0"} Dec 16 12:10:31 crc kubenswrapper[4805]: I1216 12:10:31.196767 4805 generic.go:334] "Generic (PLEG): container finished" podID="c0c0377d-ee53-45d7-87be-8f5ba37280b3" containerID="16a5695a8e3d3b29c429d085f65c7daa0bf6721f47797b6d2b3ec55f798fcf44" exitCode=0 Dec 16 12:10:31 crc kubenswrapper[4805]: I1216 12:10:31.196820 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq86h" event={"ID":"c0c0377d-ee53-45d7-87be-8f5ba37280b3","Type":"ContainerDied","Data":"16a5695a8e3d3b29c429d085f65c7daa0bf6721f47797b6d2b3ec55f798fcf44"} Dec 16 12:10:31 crc kubenswrapper[4805]: I1216 12:10:31.629912 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-x2bbm"] Dec 16 12:10:31 crc kubenswrapper[4805]: I1216 12:10:31.630876 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x2bbm" Dec 16 12:10:31 crc kubenswrapper[4805]: I1216 12:10:31.641804 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-nwcl9" Dec 16 12:10:31 crc kubenswrapper[4805]: I1216 12:10:31.642104 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 16 12:10:31 crc kubenswrapper[4805]: I1216 12:10:31.644455 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 16 12:10:31 crc kubenswrapper[4805]: I1216 12:10:31.647911 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x2bbm"] Dec 16 12:10:31 crc kubenswrapper[4805]: I1216 12:10:31.673815 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86vkk\" (UniqueName: \"kubernetes.io/projected/b6f3b635-2be1-45b6-8d70-74a18697bc67-kube-api-access-86vkk\") pod \"openstack-operator-index-x2bbm\" (UID: \"b6f3b635-2be1-45b6-8d70-74a18697bc67\") " pod="openstack-operators/openstack-operator-index-x2bbm" Dec 16 12:10:31 crc kubenswrapper[4805]: I1216 12:10:31.775044 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86vkk\" (UniqueName: \"kubernetes.io/projected/b6f3b635-2be1-45b6-8d70-74a18697bc67-kube-api-access-86vkk\") pod \"openstack-operator-index-x2bbm\" (UID: \"b6f3b635-2be1-45b6-8d70-74a18697bc67\") " pod="openstack-operators/openstack-operator-index-x2bbm" Dec 16 12:10:31 crc kubenswrapper[4805]: I1216 12:10:31.855464 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86vkk\" (UniqueName: \"kubernetes.io/projected/b6f3b635-2be1-45b6-8d70-74a18697bc67-kube-api-access-86vkk\") pod \"openstack-operator-index-x2bbm\" (UID: \"b6f3b635-2be1-45b6-8d70-74a18697bc67\") " pod="openstack-operators/openstack-operator-index-x2bbm" Dec 16 12:10:31 crc kubenswrapper[4805]: I1216 12:10:31.968853 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x2bbm" Dec 16 12:10:32 crc kubenswrapper[4805]: I1216 12:10:32.215721 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq86h" event={"ID":"c0c0377d-ee53-45d7-87be-8f5ba37280b3","Type":"ContainerStarted","Data":"b039cb5f2e359431682d5d7cacbf90dc6e47421276829a633458c7cd84083772"} Dec 16 12:10:32 crc kubenswrapper[4805]: I1216 12:10:32.215992 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq86h" event={"ID":"c0c0377d-ee53-45d7-87be-8f5ba37280b3","Type":"ContainerStarted","Data":"ed7569aea72fa5fb027db76206630e9e41a3bbf687eb8bd1af1efe60170413a0"} Dec 16 12:10:32 crc kubenswrapper[4805]: I1216 12:10:32.216005 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq86h" event={"ID":"c0c0377d-ee53-45d7-87be-8f5ba37280b3","Type":"ContainerStarted","Data":"3dd89aa753901519b264f1605e6602d5544f339fb171a02eaa9fc8e5c6a21a06"} Dec 16 12:10:32 crc kubenswrapper[4805]: I1216 12:10:32.216016 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq86h" event={"ID":"c0c0377d-ee53-45d7-87be-8f5ba37280b3","Type":"ContainerStarted","Data":"3e0f0fc23d75c2e2ce534ee58adb6a12f1ebb08d03ce7149e04989e441a0bb9e"} Dec 16 12:10:32 crc kubenswrapper[4805]: I1216 12:10:32.261448 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x2bbm"] Dec 16 12:10:33 crc kubenswrapper[4805]: I1216 12:10:33.231914 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq86h" event={"ID":"c0c0377d-ee53-45d7-87be-8f5ba37280b3","Type":"ContainerStarted","Data":"f0219fd4529822caef08a24d66abfc153d56ef3ffb5760b199a9f53cb8e6256d"} Dec 16 12:10:33 crc kubenswrapper[4805]: I1216 12:10:33.233171 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x2bbm" event={"ID":"b6f3b635-2be1-45b6-8d70-74a18697bc67","Type":"ContainerStarted","Data":"41e09dc189eba3fb8bf2391050159c76be3aa1ba2a6307cb1577a97bba81f6ed"} Dec 16 12:10:34 crc kubenswrapper[4805]: I1216 12:10:34.246944 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq86h" event={"ID":"c0c0377d-ee53-45d7-87be-8f5ba37280b3","Type":"ContainerStarted","Data":"14ec97054ee1e0d77ad13bb506851c63e31a1617da79d9abb933a7152e3dc9d0"} Dec 16 12:10:34 crc kubenswrapper[4805]: I1216 12:10:34.248665 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:34 crc kubenswrapper[4805]: I1216 12:10:34.276634 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-cq86h" podStartSLOduration=6.5064494 podStartE2EDuration="18.276616996s" podCreationTimestamp="2025-12-16 12:10:16 +0000 UTC" firstStartedPulling="2025-12-16 12:10:17.042344471 +0000 UTC m=+890.760602276" lastFinishedPulling="2025-12-16 12:10:28.812512057 +0000 UTC m=+902.530769872" observedRunningTime="2025-12-16 12:10:34.272194459 +0000 UTC m=+907.990452264" watchObservedRunningTime="2025-12-16 12:10:34.276616996 +0000 UTC m=+907.994874821" Dec 16 12:10:35 crc kubenswrapper[4805]: I1216 12:10:35.007594 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-x2bbm"] Dec 16 12:10:35 crc kubenswrapper[4805]: I1216 12:10:35.616116 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-4bhsq"] Dec 16 
12:10:35 crc kubenswrapper[4805]: I1216 12:10:35.617935 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4bhsq" Dec 16 12:10:35 crc kubenswrapper[4805]: I1216 12:10:35.621315 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4bhsq"] Dec 16 12:10:35 crc kubenswrapper[4805]: I1216 12:10:35.630817 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w287b\" (UniqueName: \"kubernetes.io/projected/42823d10-65ec-407c-93a4-98d27954a5f3-kube-api-access-w287b\") pod \"openstack-operator-index-4bhsq\" (UID: \"42823d10-65ec-407c-93a4-98d27954a5f3\") " pod="openstack-operators/openstack-operator-index-4bhsq" Dec 16 12:10:35 crc kubenswrapper[4805]: I1216 12:10:35.731526 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w287b\" (UniqueName: \"kubernetes.io/projected/42823d10-65ec-407c-93a4-98d27954a5f3-kube-api-access-w287b\") pod \"openstack-operator-index-4bhsq\" (UID: \"42823d10-65ec-407c-93a4-98d27954a5f3\") " pod="openstack-operators/openstack-operator-index-4bhsq" Dec 16 12:10:35 crc kubenswrapper[4805]: I1216 12:10:35.751949 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w287b\" (UniqueName: \"kubernetes.io/projected/42823d10-65ec-407c-93a4-98d27954a5f3-kube-api-access-w287b\") pod \"openstack-operator-index-4bhsq\" (UID: \"42823d10-65ec-407c-93a4-98d27954a5f3\") " pod="openstack-operators/openstack-operator-index-4bhsq" Dec 16 12:10:35 crc kubenswrapper[4805]: I1216 12:10:35.933841 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4bhsq" Dec 16 12:10:36 crc kubenswrapper[4805]: I1216 12:10:36.841507 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:36 crc kubenswrapper[4805]: I1216 12:10:36.906857 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-cq86h" Dec 16 12:10:36 crc kubenswrapper[4805]: I1216 12:10:36.912338 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4bhsq"] Dec 16 12:10:37 crc kubenswrapper[4805]: I1216 12:10:37.264011 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4bhsq" event={"ID":"42823d10-65ec-407c-93a4-98d27954a5f3","Type":"ContainerStarted","Data":"d843d9ce9aba464a7de2e94005179d67e954f5697b698b8445e5a226644ef767"} Dec 16 12:10:37 crc kubenswrapper[4805]: I1216 12:10:37.493246 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-mw2xw" Dec 16 12:10:38 crc kubenswrapper[4805]: I1216 12:10:38.271401 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x2bbm" event={"ID":"b6f3b635-2be1-45b6-8d70-74a18697bc67","Type":"ContainerStarted","Data":"a39de28d44639d1ac8fee8ed3cd486314ec5ccf454efbf132bad2530f74c8950"} Dec 16 12:10:38 crc kubenswrapper[4805]: I1216 12:10:38.271449 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-x2bbm" podUID="b6f3b635-2be1-45b6-8d70-74a18697bc67" containerName="registry-server" containerID="cri-o://a39de28d44639d1ac8fee8ed3cd486314ec5ccf454efbf132bad2530f74c8950" 
gracePeriod=2 Dec 16 12:10:38 crc kubenswrapper[4805]: I1216 12:10:38.275921 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4bhsq" event={"ID":"42823d10-65ec-407c-93a4-98d27954a5f3","Type":"ContainerStarted","Data":"41592ac6ca6487b1e6be241ae34d043b21e5f175c994e1b33a436bee3a4e0856"} Dec 16 12:10:38 crc kubenswrapper[4805]: I1216 12:10:38.290985 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-x2bbm" podStartSLOduration=2.2365131050000002 podStartE2EDuration="7.290966987s" podCreationTimestamp="2025-12-16 12:10:31 +0000 UTC" firstStartedPulling="2025-12-16 12:10:32.28560794 +0000 UTC m=+906.003865745" lastFinishedPulling="2025-12-16 12:10:37.340061822 +0000 UTC m=+911.058319627" observedRunningTime="2025-12-16 12:10:38.286438888 +0000 UTC m=+912.004696703" watchObservedRunningTime="2025-12-16 12:10:38.290966987 +0000 UTC m=+912.009224812" Dec 16 12:10:38 crc kubenswrapper[4805]: I1216 12:10:38.309972 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-4bhsq" podStartSLOduration=2.8335803950000003 podStartE2EDuration="3.309954512s" podCreationTimestamp="2025-12-16 12:10:35 +0000 UTC" firstStartedPulling="2025-12-16 12:10:36.923246598 +0000 UTC m=+910.641504403" lastFinishedPulling="2025-12-16 12:10:37.399620715 +0000 UTC m=+911.117878520" observedRunningTime="2025-12-16 12:10:38.30502298 +0000 UTC m=+912.023280785" watchObservedRunningTime="2025-12-16 12:10:38.309954512 +0000 UTC m=+912.028212327" Dec 16 12:10:38 crc kubenswrapper[4805]: I1216 12:10:38.658652 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-x2bbm" Dec 16 12:10:38 crc kubenswrapper[4805]: I1216 12:10:38.771869 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86vkk\" (UniqueName: \"kubernetes.io/projected/b6f3b635-2be1-45b6-8d70-74a18697bc67-kube-api-access-86vkk\") pod \"b6f3b635-2be1-45b6-8d70-74a18697bc67\" (UID: \"b6f3b635-2be1-45b6-8d70-74a18697bc67\") " Dec 16 12:10:38 crc kubenswrapper[4805]: I1216 12:10:38.777841 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6f3b635-2be1-45b6-8d70-74a18697bc67-kube-api-access-86vkk" (OuterVolumeSpecName: "kube-api-access-86vkk") pod "b6f3b635-2be1-45b6-8d70-74a18697bc67" (UID: "b6f3b635-2be1-45b6-8d70-74a18697bc67"). InnerVolumeSpecName "kube-api-access-86vkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:10:38 crc kubenswrapper[4805]: I1216 12:10:38.873771 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86vkk\" (UniqueName: \"kubernetes.io/projected/b6f3b635-2be1-45b6-8d70-74a18697bc67-kube-api-access-86vkk\") on node \"crc\" DevicePath \"\"" Dec 16 12:10:39 crc kubenswrapper[4805]: I1216 12:10:39.282831 4805 generic.go:334] "Generic (PLEG): container finished" podID="b6f3b635-2be1-45b6-8d70-74a18697bc67" containerID="a39de28d44639d1ac8fee8ed3cd486314ec5ccf454efbf132bad2530f74c8950" exitCode=0 Dec 16 12:10:39 crc kubenswrapper[4805]: I1216 12:10:39.282894 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x2bbm"
Dec 16 12:10:39 crc kubenswrapper[4805]: I1216 12:10:39.282937 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x2bbm" event={"ID":"b6f3b635-2be1-45b6-8d70-74a18697bc67","Type":"ContainerDied","Data":"a39de28d44639d1ac8fee8ed3cd486314ec5ccf454efbf132bad2530f74c8950"}
Dec 16 12:10:39 crc kubenswrapper[4805]: I1216 12:10:39.282991 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x2bbm" event={"ID":"b6f3b635-2be1-45b6-8d70-74a18697bc67","Type":"ContainerDied","Data":"41e09dc189eba3fb8bf2391050159c76be3aa1ba2a6307cb1577a97bba81f6ed"}
Dec 16 12:10:39 crc kubenswrapper[4805]: I1216 12:10:39.283017 4805 scope.go:117] "RemoveContainer" containerID="a39de28d44639d1ac8fee8ed3cd486314ec5ccf454efbf132bad2530f74c8950"
Dec 16 12:10:39 crc kubenswrapper[4805]: I1216 12:10:39.304609 4805 scope.go:117] "RemoveContainer" containerID="a39de28d44639d1ac8fee8ed3cd486314ec5ccf454efbf132bad2530f74c8950"
Dec 16 12:10:39 crc kubenswrapper[4805]: E1216 12:10:39.305784 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a39de28d44639d1ac8fee8ed3cd486314ec5ccf454efbf132bad2530f74c8950\": container with ID starting with a39de28d44639d1ac8fee8ed3cd486314ec5ccf454efbf132bad2530f74c8950 not found: ID does not exist" containerID="a39de28d44639d1ac8fee8ed3cd486314ec5ccf454efbf132bad2530f74c8950"
Dec 16 12:10:39 crc kubenswrapper[4805]: I1216 12:10:39.305833 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39de28d44639d1ac8fee8ed3cd486314ec5ccf454efbf132bad2530f74c8950"} err="failed to get container status \"a39de28d44639d1ac8fee8ed3cd486314ec5ccf454efbf132bad2530f74c8950\": rpc error: code = NotFound desc = could not find container \"a39de28d44639d1ac8fee8ed3cd486314ec5ccf454efbf132bad2530f74c8950\": container with ID starting with a39de28d44639d1ac8fee8ed3cd486314ec5ccf454efbf132bad2530f74c8950 not found: ID does not exist"
Dec 16 12:10:39 crc kubenswrapper[4805]: I1216 12:10:39.317163 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-x2bbm"]
Dec 16 12:10:39 crc kubenswrapper[4805]: I1216 12:10:39.321162 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-x2bbm"]
Dec 16 12:10:40 crc kubenswrapper[4805]: I1216 12:10:40.530179 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6f3b635-2be1-45b6-8d70-74a18697bc67" path="/var/lib/kubelet/pods/b6f3b635-2be1-45b6-8d70-74a18697bc67/volumes"
Dec 16 12:10:45 crc kubenswrapper[4805]: I1216 12:10:45.933963 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-4bhsq"
Dec 16 12:10:45 crc kubenswrapper[4805]: I1216 12:10:45.934623 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-4bhsq"
Dec 16 12:10:45 crc kubenswrapper[4805]: I1216 12:10:45.963746 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-4bhsq"
Dec 16 12:10:46 crc kubenswrapper[4805]: I1216 12:10:46.356748 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-4bhsq"
Dec 16 12:10:46 crc kubenswrapper[4805]: I1216 12:10:46.836003 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-cq86h"
Dec 16 12:10:47 crc kubenswrapper[4805]: I1216 12:10:47.449585 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-zqndj"
Dec 16 12:10:47 crc kubenswrapper[4805]: I1216 12:10:47.850501 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"]
Dec 16 12:10:47 crc kubenswrapper[4805]: E1216 12:10:47.850792 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6f3b635-2be1-45b6-8d70-74a18697bc67" containerName="registry-server"
Dec 16 12:10:47 crc kubenswrapper[4805]: I1216 12:10:47.850814 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6f3b635-2be1-45b6-8d70-74a18697bc67" containerName="registry-server"
Dec 16 12:10:47 crc kubenswrapper[4805]: I1216 12:10:47.850971 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6f3b635-2be1-45b6-8d70-74a18697bc67" containerName="registry-server"
Dec 16 12:10:47 crc kubenswrapper[4805]: I1216 12:10:47.852285 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"
Dec 16 12:10:47 crc kubenswrapper[4805]: I1216 12:10:47.854838 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-6c4c9"
Dec 16 12:10:47 crc kubenswrapper[4805]: I1216 12:10:47.860841 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"]
Dec 16 12:10:47 crc kubenswrapper[4805]: I1216 12:10:47.862883 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw964\" (UniqueName: \"kubernetes.io/projected/8da86235-63b0-46ec-bca8-b68c248b2daa-kube-api-access-hw964\") pod \"128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq\" (UID: \"8da86235-63b0-46ec-bca8-b68c248b2daa\") " pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"
Dec 16 12:10:47 crc kubenswrapper[4805]: I1216 12:10:47.862967 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8da86235-63b0-46ec-bca8-b68c248b2daa-bundle\") pod \"128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq\" (UID: \"8da86235-63b0-46ec-bca8-b68c248b2daa\") " pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"
Dec 16 12:10:47 crc kubenswrapper[4805]: I1216 12:10:47.863054 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8da86235-63b0-46ec-bca8-b68c248b2daa-util\") pod \"128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq\" (UID: \"8da86235-63b0-46ec-bca8-b68c248b2daa\") " pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"
Dec 16 12:10:47 crc kubenswrapper[4805]: I1216 12:10:47.964263 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8da86235-63b0-46ec-bca8-b68c248b2daa-util\") pod \"128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq\" (UID: \"8da86235-63b0-46ec-bca8-b68c248b2daa\") " pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"
Dec 16 12:10:47 crc kubenswrapper[4805]: I1216 12:10:47.964418 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw964\" (UniqueName: \"kubernetes.io/projected/8da86235-63b0-46ec-bca8-b68c248b2daa-kube-api-access-hw964\") pod \"128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq\" (UID: \"8da86235-63b0-46ec-bca8-b68c248b2daa\") " pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"
Dec 16 12:10:47 crc kubenswrapper[4805]: I1216 12:10:47.964512 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8da86235-63b0-46ec-bca8-b68c248b2daa-bundle\") pod \"128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq\" (UID: \"8da86235-63b0-46ec-bca8-b68c248b2daa\") " pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"
Dec 16 12:10:47 crc kubenswrapper[4805]: I1216 12:10:47.964918 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8da86235-63b0-46ec-bca8-b68c248b2daa-util\") pod \"128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq\" (UID: \"8da86235-63b0-46ec-bca8-b68c248b2daa\") " pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"
Dec 16 12:10:47 crc kubenswrapper[4805]: I1216 12:10:47.964932 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8da86235-63b0-46ec-bca8-b68c248b2daa-bundle\") pod \"128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq\" (UID: \"8da86235-63b0-46ec-bca8-b68c248b2daa\") " pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"
Dec 16 12:10:48 crc kubenswrapper[4805]: I1216 12:10:48.002000 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw964\" (UniqueName: \"kubernetes.io/projected/8da86235-63b0-46ec-bca8-b68c248b2daa-kube-api-access-hw964\") pod \"128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq\" (UID: \"8da86235-63b0-46ec-bca8-b68c248b2daa\") " pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"
Dec 16 12:10:48 crc kubenswrapper[4805]: I1216 12:10:48.171841 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"
Dec 16 12:10:48 crc kubenswrapper[4805]: I1216 12:10:48.619002 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"]
Dec 16 12:10:48 crc kubenswrapper[4805]: W1216 12:10:48.622083 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8da86235_63b0_46ec_bca8_b68c248b2daa.slice/crio-64390dc155864a3f26facfb0cd8ff37eb3cc0ed708577e0a38fb2304a97a9406 WatchSource:0}: Error finding container 64390dc155864a3f26facfb0cd8ff37eb3cc0ed708577e0a38fb2304a97a9406: Status 404 returned error can't find the container with id 64390dc155864a3f26facfb0cd8ff37eb3cc0ed708577e0a38fb2304a97a9406
Dec 16 12:10:49 crc kubenswrapper[4805]: I1216 12:10:49.354988 4805 generic.go:334] "Generic (PLEG): container finished" podID="8da86235-63b0-46ec-bca8-b68c248b2daa" containerID="59183dbc27f6e7e10c4885713afd663b448182cce82753e367a8bceb3398b941" exitCode=0
Dec 16 12:10:49 crc kubenswrapper[4805]: I1216 12:10:49.355072 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq" event={"ID":"8da86235-63b0-46ec-bca8-b68c248b2daa","Type":"ContainerDied","Data":"59183dbc27f6e7e10c4885713afd663b448182cce82753e367a8bceb3398b941"}
Dec 16 12:10:49 crc kubenswrapper[4805]: I1216 12:10:49.355282 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq" event={"ID":"8da86235-63b0-46ec-bca8-b68c248b2daa","Type":"ContainerStarted","Data":"64390dc155864a3f26facfb0cd8ff37eb3cc0ed708577e0a38fb2304a97a9406"}
Dec 16 12:10:51 crc kubenswrapper[4805]: I1216 12:10:51.372573 4805 generic.go:334] "Generic (PLEG): container finished" podID="8da86235-63b0-46ec-bca8-b68c248b2daa" containerID="f9a8474163378dd27ff34444b9b3bba179d20182d23db78dee9472eae5fafd2b" exitCode=0
Dec 16 12:10:51 crc kubenswrapper[4805]: I1216 12:10:51.372660 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq" event={"ID":"8da86235-63b0-46ec-bca8-b68c248b2daa","Type":"ContainerDied","Data":"f9a8474163378dd27ff34444b9b3bba179d20182d23db78dee9472eae5fafd2b"}
Dec 16 12:10:52 crc kubenswrapper[4805]: I1216 12:10:52.381942 4805 generic.go:334] "Generic (PLEG): container finished" podID="8da86235-63b0-46ec-bca8-b68c248b2daa" containerID="8b34c46b8a7e8f03378ba0d85caf95d94e40b0f81bdc23c64f845eac7ae159d1" exitCode=0
Dec 16 12:10:52 crc kubenswrapper[4805]: I1216 12:10:52.382035 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq" event={"ID":"8da86235-63b0-46ec-bca8-b68c248b2daa","Type":"ContainerDied","Data":"8b34c46b8a7e8f03378ba0d85caf95d94e40b0f81bdc23c64f845eac7ae159d1"}
Dec 16 12:10:53 crc kubenswrapper[4805]: I1216 12:10:53.622586 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"
Dec 16 12:10:53 crc kubenswrapper[4805]: I1216 12:10:53.782494 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8da86235-63b0-46ec-bca8-b68c248b2daa-bundle\") pod \"8da86235-63b0-46ec-bca8-b68c248b2daa\" (UID: \"8da86235-63b0-46ec-bca8-b68c248b2daa\") "
Dec 16 12:10:53 crc kubenswrapper[4805]: I1216 12:10:53.782720 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8da86235-63b0-46ec-bca8-b68c248b2daa-util\") pod \"8da86235-63b0-46ec-bca8-b68c248b2daa\" (UID: \"8da86235-63b0-46ec-bca8-b68c248b2daa\") "
Dec 16 12:10:53 crc kubenswrapper[4805]: I1216 12:10:53.783075 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw964\" (UniqueName: \"kubernetes.io/projected/8da86235-63b0-46ec-bca8-b68c248b2daa-kube-api-access-hw964\") pod \"8da86235-63b0-46ec-bca8-b68c248b2daa\" (UID: \"8da86235-63b0-46ec-bca8-b68c248b2daa\") "
Dec 16 12:10:53 crc kubenswrapper[4805]: I1216 12:10:53.783311 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8da86235-63b0-46ec-bca8-b68c248b2daa-bundle" (OuterVolumeSpecName: "bundle") pod "8da86235-63b0-46ec-bca8-b68c248b2daa" (UID: "8da86235-63b0-46ec-bca8-b68c248b2daa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 16 12:10:53 crc kubenswrapper[4805]: I1216 12:10:53.783857 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8da86235-63b0-46ec-bca8-b68c248b2daa-bundle\") on node \"crc\" DevicePath \"\""
Dec 16 12:10:53 crc kubenswrapper[4805]: I1216 12:10:53.792728 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da86235-63b0-46ec-bca8-b68c248b2daa-kube-api-access-hw964" (OuterVolumeSpecName: "kube-api-access-hw964") pod "8da86235-63b0-46ec-bca8-b68c248b2daa" (UID: "8da86235-63b0-46ec-bca8-b68c248b2daa"). InnerVolumeSpecName "kube-api-access-hw964". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 12:10:53 crc kubenswrapper[4805]: I1216 12:10:53.798705 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8da86235-63b0-46ec-bca8-b68c248b2daa-util" (OuterVolumeSpecName: "util") pod "8da86235-63b0-46ec-bca8-b68c248b2daa" (UID: "8da86235-63b0-46ec-bca8-b68c248b2daa"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 16 12:10:53 crc kubenswrapper[4805]: I1216 12:10:53.885535 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8da86235-63b0-46ec-bca8-b68c248b2daa-util\") on node \"crc\" DevicePath \"\""
Dec 16 12:10:53 crc kubenswrapper[4805]: I1216 12:10:53.885590 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw964\" (UniqueName: \"kubernetes.io/projected/8da86235-63b0-46ec-bca8-b68c248b2daa-kube-api-access-hw964\") on node \"crc\" DevicePath \"\""
Dec 16 12:10:54 crc kubenswrapper[4805]: I1216 12:10:54.398445 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq" event={"ID":"8da86235-63b0-46ec-bca8-b68c248b2daa","Type":"ContainerDied","Data":"64390dc155864a3f26facfb0cd8ff37eb3cc0ed708577e0a38fb2304a97a9406"}
Dec 16 12:10:54 crc kubenswrapper[4805]: I1216 12:10:54.398682 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64390dc155864a3f26facfb0cd8ff37eb3cc0ed708577e0a38fb2304a97a9406"
Dec 16 12:10:54 crc kubenswrapper[4805]: I1216 12:10:54.398513 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.071420 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.071752 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.124453 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9w4vc"]
Dec 16 12:10:57 crc kubenswrapper[4805]: E1216 12:10:57.124757 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da86235-63b0-46ec-bca8-b68c248b2daa" containerName="pull"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.124778 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da86235-63b0-46ec-bca8-b68c248b2daa" containerName="pull"
Dec 16 12:10:57 crc kubenswrapper[4805]: E1216 12:10:57.124798 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da86235-63b0-46ec-bca8-b68c248b2daa" containerName="util"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.124804 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da86235-63b0-46ec-bca8-b68c248b2daa" containerName="util"
Dec 16 12:10:57 crc kubenswrapper[4805]: E1216 12:10:57.124812 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da86235-63b0-46ec-bca8-b68c248b2daa" containerName="extract"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.124818 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da86235-63b0-46ec-bca8-b68c248b2daa" containerName="extract"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.124933 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da86235-63b0-46ec-bca8-b68c248b2daa" containerName="extract"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.126062 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9w4vc"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.144368 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9w4vc"]
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.188157 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5-catalog-content\") pod \"community-operators-9w4vc\" (UID: \"c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5\") " pod="openshift-marketplace/community-operators-9w4vc"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.188230 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcqbb\" (UniqueName: \"kubernetes.io/projected/c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5-kube-api-access-pcqbb\") pod \"community-operators-9w4vc\" (UID: \"c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5\") " pod="openshift-marketplace/community-operators-9w4vc"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.188496 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5-utilities\") pod \"community-operators-9w4vc\" (UID: \"c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5\") " pod="openshift-marketplace/community-operators-9w4vc"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.289441 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5-catalog-content\") pod \"community-operators-9w4vc\" (UID: \"c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5\") " pod="openshift-marketplace/community-operators-9w4vc"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.289580 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcqbb\" (UniqueName: \"kubernetes.io/projected/c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5-kube-api-access-pcqbb\") pod \"community-operators-9w4vc\" (UID: \"c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5\") " pod="openshift-marketplace/community-operators-9w4vc"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.289648 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5-utilities\") pod \"community-operators-9w4vc\" (UID: \"c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5\") " pod="openshift-marketplace/community-operators-9w4vc"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.290054 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5-catalog-content\") pod \"community-operators-9w4vc\" (UID: \"c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5\") " pod="openshift-marketplace/community-operators-9w4vc"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.290189 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5-utilities\") pod \"community-operators-9w4vc\" (UID: \"c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5\") " pod="openshift-marketplace/community-operators-9w4vc"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.313406 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcqbb\" (UniqueName: \"kubernetes.io/projected/c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5-kube-api-access-pcqbb\") pod \"community-operators-9w4vc\" (UID: \"c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5\") " pod="openshift-marketplace/community-operators-9w4vc"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.463495 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9w4vc"
Dec 16 12:10:57 crc kubenswrapper[4805]: I1216 12:10:57.820570 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9w4vc"]
Dec 16 12:10:58 crc kubenswrapper[4805]: I1216 12:10:58.498828 4805 generic.go:334] "Generic (PLEG): container finished" podID="c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5" containerID="365f31f987294f485dc9f3c53d8b4d9380b216c058d9bf3526333cabc23f6389" exitCode=0
Dec 16 12:10:58 crc kubenswrapper[4805]: I1216 12:10:58.498946 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w4vc" event={"ID":"c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5","Type":"ContainerDied","Data":"365f31f987294f485dc9f3c53d8b4d9380b216c058d9bf3526333cabc23f6389"}
Dec 16 12:10:58 crc kubenswrapper[4805]: I1216 12:10:58.499112 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w4vc" event={"ID":"c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5","Type":"ContainerStarted","Data":"a6be0ac28c7b32779dde9763cd6091e33500e0dd12cabdab6a27f736b064aec9"}
Dec 16 12:10:59 crc kubenswrapper[4805]: I1216 12:10:59.206922 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-777d8df86-zk62z"]
Dec 16 12:10:59 crc kubenswrapper[4805]: I1216 12:10:59.208455 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-777d8df86-zk62z"
Dec 16 12:10:59 crc kubenswrapper[4805]: I1216 12:10:59.216118 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-5rlvc"
Dec 16 12:10:59 crc kubenswrapper[4805]: I1216 12:10:59.261537 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-777d8df86-zk62z"]
Dec 16 12:10:59 crc kubenswrapper[4805]: I1216 12:10:59.394184 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjwf2\" (UniqueName: \"kubernetes.io/projected/990d476a-8cea-4e1a-8e5f-10fa313d23cb-kube-api-access-sjwf2\") pod \"openstack-operator-controller-operator-777d8df86-zk62z\" (UID: \"990d476a-8cea-4e1a-8e5f-10fa313d23cb\") " pod="openstack-operators/openstack-operator-controller-operator-777d8df86-zk62z"
Dec 16 12:10:59 crc kubenswrapper[4805]: I1216 12:10:59.495577 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjwf2\" (UniqueName: \"kubernetes.io/projected/990d476a-8cea-4e1a-8e5f-10fa313d23cb-kube-api-access-sjwf2\") pod \"openstack-operator-controller-operator-777d8df86-zk62z\" (UID: \"990d476a-8cea-4e1a-8e5f-10fa313d23cb\") " pod="openstack-operators/openstack-operator-controller-operator-777d8df86-zk62z"
Dec 16 12:10:59 crc kubenswrapper[4805]: I1216 12:10:59.524463 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjwf2\" (UniqueName: \"kubernetes.io/projected/990d476a-8cea-4e1a-8e5f-10fa313d23cb-kube-api-access-sjwf2\") pod \"openstack-operator-controller-operator-777d8df86-zk62z\" (UID: \"990d476a-8cea-4e1a-8e5f-10fa313d23cb\") " pod="openstack-operators/openstack-operator-controller-operator-777d8df86-zk62z"
Dec 16 12:10:59 crc kubenswrapper[4805]: I1216 12:10:59.824872 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-777d8df86-zk62z"
Dec 16 12:11:00 crc kubenswrapper[4805]: I1216 12:11:00.362183 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-777d8df86-zk62z"]
Dec 16 12:11:00 crc kubenswrapper[4805]: I1216 12:11:00.519966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-777d8df86-zk62z" event={"ID":"990d476a-8cea-4e1a-8e5f-10fa313d23cb","Type":"ContainerStarted","Data":"3c70bee821e04c27068904145ec5f2d4b10f90fec3bdd118a2006c1f862978d4"}
Dec 16 12:11:07 crc kubenswrapper[4805]: I1216 12:11:07.575249 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-777d8df86-zk62z" event={"ID":"990d476a-8cea-4e1a-8e5f-10fa313d23cb","Type":"ContainerStarted","Data":"4aa62accbe026ef5b087ffffcdc0fd49ff7c504d89f530f4bd25e9c060033511"}
Dec 16 12:11:07 crc kubenswrapper[4805]: I1216 12:11:07.577698 4805 generic.go:334] "Generic (PLEG): container finished" podID="c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5" containerID="542033535e41558061d1a581c966574dd128b13d8e61b0d709d5c97351834e0a" exitCode=0
Dec 16 12:11:07 crc kubenswrapper[4805]: I1216 12:11:07.577738 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w4vc" event={"ID":"c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5","Type":"ContainerDied","Data":"542033535e41558061d1a581c966574dd128b13d8e61b0d709d5c97351834e0a"}
Dec 16 12:11:08 crc kubenswrapper[4805]: I1216 12:11:08.585498 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w4vc" event={"ID":"c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5","Type":"ContainerStarted","Data":"02f09cfd7cdc02394d4f4ef0e1948c6dae9965469d536de1167755f5612a83f2"}
Dec 16 12:11:08 crc kubenswrapper[4805]: I1216 12:11:08.611856 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9w4vc" podStartSLOduration=1.964274732 podStartE2EDuration="11.611835706s" podCreationTimestamp="2025-12-16 12:10:57 +0000 UTC" firstStartedPulling="2025-12-16 12:10:58.500465956 +0000 UTC m=+932.218723771" lastFinishedPulling="2025-12-16 12:11:08.14802694 +0000 UTC m=+941.866284745" observedRunningTime="2025-12-16 12:11:08.60672571 +0000 UTC m=+942.324983515" watchObservedRunningTime="2025-12-16 12:11:08.611835706 +0000 UTC m=+942.330093531"
Dec 16 12:11:12 crc kubenswrapper[4805]: I1216 12:11:12.615004 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-777d8df86-zk62z" event={"ID":"990d476a-8cea-4e1a-8e5f-10fa313d23cb","Type":"ContainerStarted","Data":"8b716cff17617749b396525e9f0ab9269bf72f1ea5e8cfdb7851bb7c5d8bee7c"}
Dec 16 12:11:12 crc kubenswrapper[4805]: I1216 12:11:12.615546 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-777d8df86-zk62z"
Dec 16 12:11:12 crc kubenswrapper[4805]: I1216 12:11:12.618993 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-777d8df86-zk62z"
Dec 16 12:11:12 crc kubenswrapper[4805]: I1216 12:11:12.717723 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-777d8df86-zk62z" podStartSLOduration=2.3529748010000002 podStartE2EDuration="13.717704912s" podCreationTimestamp="2025-12-16 12:10:59 +0000 UTC" firstStartedPulling="2025-12-16 12:11:00.386531845 +0000 UTC m=+934.104789650" lastFinishedPulling="2025-12-16 12:11:11.751261956 +0000 UTC m=+945.469519761" observedRunningTime="2025-12-16 12:11:12.663328153 +0000 UTC m=+946.381585978" watchObservedRunningTime="2025-12-16 12:11:12.717704912 +0000 UTC m=+946.435962727"
Dec 16 12:11:17 crc kubenswrapper[4805]: I1216 12:11:17.464180 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9w4vc"
Dec 16 12:11:17 crc kubenswrapper[4805]: I1216 12:11:17.464495 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9w4vc"
Dec 16 12:11:17 crc kubenswrapper[4805]: I1216 12:11:17.532782 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9w4vc"
Dec 16 12:11:17 crc kubenswrapper[4805]: I1216 12:11:17.704214 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9w4vc"
Dec 16 12:11:17 crc kubenswrapper[4805]: I1216 12:11:17.984184 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9w4vc"]
Dec 16 12:11:18 crc kubenswrapper[4805]: I1216 12:11:18.175777 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52zv4"]
Dec 16 12:11:18 crc kubenswrapper[4805]: I1216 12:11:18.176011 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-52zv4" podUID="2dab2e93-e0fa-4714-8c47-5b09df4633c5" containerName="registry-server" containerID="cri-o://4c5727932d6219cc0e43d01f809bbe4c70d98be622829bad8eb53568d19bc5cc" gracePeriod=2
Dec 16 12:11:20 crc kubenswrapper[4805]: I1216 12:11:20.668208 4805 generic.go:334] "Generic (PLEG): container finished" podID="2dab2e93-e0fa-4714-8c47-5b09df4633c5" containerID="4c5727932d6219cc0e43d01f809bbe4c70d98be622829bad8eb53568d19bc5cc" exitCode=0
Dec 16 12:11:20 crc kubenswrapper[4805]: I1216 12:11:20.668672 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52zv4" event={"ID":"2dab2e93-e0fa-4714-8c47-5b09df4633c5","Type":"ContainerDied","Data":"4c5727932d6219cc0e43d01f809bbe4c70d98be622829bad8eb53568d19bc5cc"}
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.341716 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52zv4"
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.504957 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dab2e93-e0fa-4714-8c47-5b09df4633c5-catalog-content\") pod \"2dab2e93-e0fa-4714-8c47-5b09df4633c5\" (UID: \"2dab2e93-e0fa-4714-8c47-5b09df4633c5\") "
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.505024 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dab2e93-e0fa-4714-8c47-5b09df4633c5-utilities\") pod \"2dab2e93-e0fa-4714-8c47-5b09df4633c5\" (UID: \"2dab2e93-e0fa-4714-8c47-5b09df4633c5\") "
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.505130 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcp6s\" (UniqueName: \"kubernetes.io/projected/2dab2e93-e0fa-4714-8c47-5b09df4633c5-kube-api-access-xcp6s\") pod \"2dab2e93-e0fa-4714-8c47-5b09df4633c5\" (UID: \"2dab2e93-e0fa-4714-8c47-5b09df4633c5\") "
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.505937 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dab2e93-e0fa-4714-8c47-5b09df4633c5-utilities" (OuterVolumeSpecName: "utilities") pod "2dab2e93-e0fa-4714-8c47-5b09df4633c5" (UID: "2dab2e93-e0fa-4714-8c47-5b09df4633c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.560130 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dab2e93-e0fa-4714-8c47-5b09df4633c5-kube-api-access-xcp6s" (OuterVolumeSpecName: "kube-api-access-xcp6s") pod "2dab2e93-e0fa-4714-8c47-5b09df4633c5" (UID: "2dab2e93-e0fa-4714-8c47-5b09df4633c5"). InnerVolumeSpecName "kube-api-access-xcp6s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.582767 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dab2e93-e0fa-4714-8c47-5b09df4633c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2dab2e93-e0fa-4714-8c47-5b09df4633c5" (UID: "2dab2e93-e0fa-4714-8c47-5b09df4633c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.606835 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcp6s\" (UniqueName: \"kubernetes.io/projected/2dab2e93-e0fa-4714-8c47-5b09df4633c5-kube-api-access-xcp6s\") on node \"crc\" DevicePath \"\""
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.606866 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dab2e93-e0fa-4714-8c47-5b09df4633c5-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.606876 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dab2e93-e0fa-4714-8c47-5b09df4633c5-utilities\") on node \"crc\" DevicePath \"\""
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.679266 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52zv4" event={"ID":"2dab2e93-e0fa-4714-8c47-5b09df4633c5","Type":"ContainerDied","Data":"2e75398637cc09d4cfffa7ee161c1ee9c91f7f0a2327ca644485ca145d17adec"}
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.679336 4805 scope.go:117] "RemoveContainer" containerID="4c5727932d6219cc0e43d01f809bbe4c70d98be622829bad8eb53568d19bc5cc"
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.679506 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52zv4"
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.721392 4805 scope.go:117] "RemoveContainer" containerID="5d50e1cbdb765973b860e42cc8024012c6b442b3fa2910758da12ea9151ffad2"
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.743743 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52zv4"]
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.747634 4805 scope.go:117] "RemoveContainer" containerID="693b6183565dd12ab504e64c5a43a68342a700a3876e208ca54ccd140a507de7"
Dec 16 12:11:21 crc kubenswrapper[4805]: I1216 12:11:21.770592 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-52zv4"]
Dec 16 12:11:22 crc kubenswrapper[4805]: I1216 12:11:22.530207 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dab2e93-e0fa-4714-8c47-5b09df4633c5" path="/var/lib/kubelet/pods/2dab2e93-e0fa-4714-8c47-5b09df4633c5/volumes"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.633665 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qhwx5"]
Dec 16 12:11:25 crc kubenswrapper[4805]: E1216 12:11:25.634521 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dab2e93-e0fa-4714-8c47-5b09df4633c5" containerName="registry-server"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.634540 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dab2e93-e0fa-4714-8c47-5b09df4633c5" containerName="registry-server"
Dec 16 12:11:25 crc kubenswrapper[4805]: E1216 12:11:25.634566 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dab2e93-e0fa-4714-8c47-5b09df4633c5" containerName="extract-content"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.634575 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dab2e93-e0fa-4714-8c47-5b09df4633c5" containerName="extract-content"
Dec 16 12:11:25 crc kubenswrapper[4805]: E1216 12:11:25.634605 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dab2e93-e0fa-4714-8c47-5b09df4633c5" containerName="extract-utilities"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.634613 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dab2e93-e0fa-4714-8c47-5b09df4633c5" containerName="extract-utilities"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.634761 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dab2e93-e0fa-4714-8c47-5b09df4633c5" containerName="registry-server"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.635961 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhwx5"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.652013 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhwx5"]
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.778208 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/706f8077-edd1-4edf-95ae-9e9602b72758-utilities\") pod \"redhat-marketplace-qhwx5\" (UID: \"706f8077-edd1-4edf-95ae-9e9602b72758\") " pod="openshift-marketplace/redhat-marketplace-qhwx5"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.778291 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2d52\" (UniqueName: \"kubernetes.io/projected/706f8077-edd1-4edf-95ae-9e9602b72758-kube-api-access-c2d52\") pod \"redhat-marketplace-qhwx5\" (UID: \"706f8077-edd1-4edf-95ae-9e9602b72758\") " pod="openshift-marketplace/redhat-marketplace-qhwx5"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.778317 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/706f8077-edd1-4edf-95ae-9e9602b72758-catalog-content\") pod \"redhat-marketplace-qhwx5\" (UID: \"706f8077-edd1-4edf-95ae-9e9602b72758\") " pod="openshift-marketplace/redhat-marketplace-qhwx5"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.879506 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2d52\" (UniqueName: \"kubernetes.io/projected/706f8077-edd1-4edf-95ae-9e9602b72758-kube-api-access-c2d52\") pod \"redhat-marketplace-qhwx5\" (UID: \"706f8077-edd1-4edf-95ae-9e9602b72758\") " pod="openshift-marketplace/redhat-marketplace-qhwx5"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.879557 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/706f8077-edd1-4edf-95ae-9e9602b72758-catalog-content\") pod \"redhat-marketplace-qhwx5\" (UID: \"706f8077-edd1-4edf-95ae-9e9602b72758\") " pod="openshift-marketplace/redhat-marketplace-qhwx5"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.879621 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/706f8077-edd1-4edf-95ae-9e9602b72758-utilities\") pod \"redhat-marketplace-qhwx5\" (UID: \"706f8077-edd1-4edf-95ae-9e9602b72758\") " pod="openshift-marketplace/redhat-marketplace-qhwx5"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.880112 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/706f8077-edd1-4edf-95ae-9e9602b72758-utilities\") pod \"redhat-marketplace-qhwx5\" (UID: \"706f8077-edd1-4edf-95ae-9e9602b72758\") " pod="openshift-marketplace/redhat-marketplace-qhwx5"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.880247 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/706f8077-edd1-4edf-95ae-9e9602b72758-catalog-content\") pod \"redhat-marketplace-qhwx5\" (UID: \"706f8077-edd1-4edf-95ae-9e9602b72758\") " pod="openshift-marketplace/redhat-marketplace-qhwx5"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.900985 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2d52\" (UniqueName: \"kubernetes.io/projected/706f8077-edd1-4edf-95ae-9e9602b72758-kube-api-access-c2d52\") pod \"redhat-marketplace-qhwx5\" (UID: \"706f8077-edd1-4edf-95ae-9e9602b72758\") " pod="openshift-marketplace/redhat-marketplace-qhwx5"
Dec 16 12:11:25 crc kubenswrapper[4805]: I1216 12:11:25.955318 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhwx5"
Dec 16 12:11:26 crc kubenswrapper[4805]: I1216 12:11:26.592595 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhwx5"]
Dec 16 12:11:26 crc kubenswrapper[4805]: I1216 12:11:26.795608 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhwx5" event={"ID":"706f8077-edd1-4edf-95ae-9e9602b72758","Type":"ContainerStarted","Data":"8d5f0aff2523d79be37f7598dc8cb3fbb9a8e39cd71a92ad01b068573430c9f5"}
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.071472 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.071573 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.076033 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-bb565c8dd-5gtrz"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.077492 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-bb565c8dd-5gtrz"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.081555 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-m8k44"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.097995 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-bb565c8dd-5gtrz"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.134354 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-69977bdf55-t27s6"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.137314 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-69977bdf55-t27s6"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.141336 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-wt8zt"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.149874 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-669b58f65-782cb"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.151011 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-669b58f65-782cb"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.162219 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-69977bdf55-t27s6"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.162302 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-2n292"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.163374 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5847f67c56-pp6mh"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.164586 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5847f67c56-pp6mh"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.170517 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-rzrvm"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.178500 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-669b58f65-782cb"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.200450 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5847f67c56-pp6mh"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.222306 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx9l2\" (UniqueName: \"kubernetes.io/projected/81b6ebe6-984f-4ecc-9d75-ac78097f7af2-kube-api-access-nx9l2\") pod \"barbican-operator-controller-manager-bb565c8dd-5gtrz\" (UID: \"81b6ebe6-984f-4ecc-9d75-ac78097f7af2\") " pod="openstack-operators/barbican-operator-controller-manager-bb565c8dd-5gtrz"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.222368 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcszj\" (UniqueName: \"kubernetes.io/projected/4a51f724-a3be-4ef5-acd6-84891873147b-kube-api-access-pcszj\") pod \"designate-operator-controller-manager-69977bdf55-t27s6\" (UID: \"4a51f724-a3be-4ef5-acd6-84891873147b\") " pod="openstack-operators/designate-operator-controller-manager-69977bdf55-t27s6"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.222433 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8pqc\" (UniqueName: \"kubernetes.io/projected/b738fc79-2b52-4759-a3ee-72e0946df392-kube-api-access-l8pqc\") pod \"glance-operator-controller-manager-5847f67c56-pp6mh\" (UID: \"b738fc79-2b52-4759-a3ee-72e0946df392\") " pod="openstack-operators/glance-operator-controller-manager-5847f67c56-pp6mh"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.222501 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmk5s\" (UniqueName: \"kubernetes.io/projected/f31accc0-70b7-4014-ac71-679dc729ed80-kube-api-access-tmk5s\") pod \"cinder-operator-controller-manager-669b58f65-782cb\" (UID: \"f31accc0-70b7-4014-ac71-679dc729ed80\") " pod="openstack-operators/cinder-operator-controller-manager-669b58f65-782cb"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.260310 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6985cf78fb-rfvkc"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.261898 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6985cf78fb-rfvkc"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.263921 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-lbldk"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.271777 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-7b45cd6d68-g5msx"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.352727 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx9l2\" (UniqueName: \"kubernetes.io/projected/81b6ebe6-984f-4ecc-9d75-ac78097f7af2-kube-api-access-nx9l2\") pod \"barbican-operator-controller-manager-bb565c8dd-5gtrz\" (UID: \"81b6ebe6-984f-4ecc-9d75-ac78097f7af2\") " pod="openstack-operators/barbican-operator-controller-manager-bb565c8dd-5gtrz"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.352772 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcszj\" (UniqueName: \"kubernetes.io/projected/4a51f724-a3be-4ef5-acd6-84891873147b-kube-api-access-pcszj\") pod \"designate-operator-controller-manager-69977bdf55-t27s6\" (UID: \"4a51f724-a3be-4ef5-acd6-84891873147b\") " pod="openstack-operators/designate-operator-controller-manager-69977bdf55-t27s6"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.352827 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8pqc\" (UniqueName: \"kubernetes.io/projected/b738fc79-2b52-4759-a3ee-72e0946df392-kube-api-access-l8pqc\") pod \"glance-operator-controller-manager-5847f67c56-pp6mh\" (UID: \"b738fc79-2b52-4759-a3ee-72e0946df392\") " pod="openstack-operators/glance-operator-controller-manager-5847f67c56-pp6mh"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.352885 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmk5s\" (UniqueName: \"kubernetes.io/projected/f31accc0-70b7-4014-ac71-679dc729ed80-kube-api-access-tmk5s\") pod \"cinder-operator-controller-manager-669b58f65-782cb\" (UID: \"f31accc0-70b7-4014-ac71-679dc729ed80\") " pod="openstack-operators/cinder-operator-controller-manager-669b58f65-782cb"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.353196 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-7b45cd6d68-g5msx"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.362420 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-g7lqf"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.366280 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6985cf78fb-rfvkc"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.381183 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-7b45cd6d68-g5msx"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.402068 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmk5s\" (UniqueName: \"kubernetes.io/projected/f31accc0-70b7-4014-ac71-679dc729ed80-kube-api-access-tmk5s\") pod \"cinder-operator-controller-manager-669b58f65-782cb\" (UID: \"f31accc0-70b7-4014-ac71-679dc729ed80\") " pod="openstack-operators/cinder-operator-controller-manager-669b58f65-782cb"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.402681 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcszj\" (UniqueName: \"kubernetes.io/projected/4a51f724-a3be-4ef5-acd6-84891873147b-kube-api-access-pcszj\") pod \"designate-operator-controller-manager-69977bdf55-t27s6\" (UID: \"4a51f724-a3be-4ef5-acd6-84891873147b\") " pod="openstack-operators/designate-operator-controller-manager-69977bdf55-t27s6"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.409839 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx9l2\" (UniqueName: \"kubernetes.io/projected/81b6ebe6-984f-4ecc-9d75-ac78097f7af2-kube-api-access-nx9l2\") pod \"barbican-operator-controller-manager-bb565c8dd-5gtrz\" (UID: \"81b6ebe6-984f-4ecc-9d75-ac78097f7af2\") " pod="openstack-operators/barbican-operator-controller-manager-bb565c8dd-5gtrz"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.411337 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8pqc\" (UniqueName: \"kubernetes.io/projected/b738fc79-2b52-4759-a3ee-72e0946df392-kube-api-access-l8pqc\") pod \"glance-operator-controller-manager-5847f67c56-pp6mh\" (UID: \"b738fc79-2b52-4759-a3ee-72e0946df392\") " pod="openstack-operators/glance-operator-controller-manager-5847f67c56-pp6mh"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.453780 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc7cx\" (UniqueName: \"kubernetes.io/projected/37dfd47c-0789-4054-a4c7-37cff4d15b15-kube-api-access-rc7cx\") pod \"heat-operator-controller-manager-7b45cd6d68-g5msx\" (UID: \"37dfd47c-0789-4054-a4c7-37cff4d15b15\") " pod="openstack-operators/heat-operator-controller-manager-7b45cd6d68-g5msx"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.453873 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvkxs\" (UniqueName: \"kubernetes.io/projected/81719820-96fa-418d-9d0b-18ba90027850-kube-api-access-kvkxs\") pod \"horizon-operator-controller-manager-6985cf78fb-rfvkc\" (UID: \"81719820-96fa-418d-9d0b-18ba90027850\") " pod="openstack-operators/horizon-operator-controller-manager-6985cf78fb-rfvkc"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.462681 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-69977bdf55-t27s6"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.476535 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-669b58f65-782cb"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.498480 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5847f67c56-pp6mh"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.500741 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.504567 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.513625 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54fd9dc4b5-tszdx"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.514986 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-54fd9dc4b5-tszdx"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.535670 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.535843 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9tpgs"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.537361 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.566011 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54fd9dc4b5-tszdx"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.566693 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-cg456"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.610937 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc7cx\" (UniqueName: \"kubernetes.io/projected/37dfd47c-0789-4054-a4c7-37cff4d15b15-kube-api-access-rc7cx\") pod \"heat-operator-controller-manager-7b45cd6d68-g5msx\" (UID: \"37dfd47c-0789-4054-a4c7-37cff4d15b15\") " pod="openstack-operators/heat-operator-controller-manager-7b45cd6d68-g5msx"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.611088 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbft9\" (UniqueName: \"kubernetes.io/projected/0718fce0-14e8-434b-be98-ef48ec6059f3-kube-api-access-hbft9\") pod \"ironic-operator-controller-manager-54fd9dc4b5-tszdx\" (UID: \"0718fce0-14e8-434b-be98-ef48ec6059f3\") " pod="openstack-operators/ironic-operator-controller-manager-54fd9dc4b5-tszdx"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.611123 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvkxs\" (UniqueName: \"kubernetes.io/projected/81719820-96fa-418d-9d0b-18ba90027850-kube-api-access-kvkxs\") pod \"horizon-operator-controller-manager-6985cf78fb-rfvkc\" (UID: \"81719820-96fa-418d-9d0b-18ba90027850\") " pod="openstack-operators/horizon-operator-controller-manager-6985cf78fb-rfvkc"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.611209 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e817150a-4845-4d56-8dd0-229394b946db-cert\") pod \"infra-operator-controller-manager-85d55b5858-4gk2l\" (UID: \"e817150a-4845-4d56-8dd0-229394b946db\") " pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.611232 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgfn5\" (UniqueName: \"kubernetes.io/projected/e817150a-4845-4d56-8dd0-229394b946db-kube-api-access-vgfn5\") pod \"infra-operator-controller-manager-85d55b5858-4gk2l\" (UID: \"e817150a-4845-4d56-8dd0-229394b946db\") " pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.619004 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7f764db9b-hq9vm"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.620002 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7f764db9b-hq9vm"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.620708 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7cc599445b-b76nx"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.640832 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gds62"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.645498 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7cc599445b-b76nx"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.648581 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-vpk9j"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.658953 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-58879495c-pqrdf"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.664739 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-58879495c-pqrdf"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.684282 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvkxs\" (UniqueName: \"kubernetes.io/projected/81719820-96fa-418d-9d0b-18ba90027850-kube-api-access-kvkxs\") pod \"horizon-operator-controller-manager-6985cf78fb-rfvkc\" (UID: \"81719820-96fa-418d-9d0b-18ba90027850\") " pod="openstack-operators/horizon-operator-controller-manager-6985cf78fb-rfvkc"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.684778 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-h2w2l"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.692081 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc7cx\" (UniqueName: \"kubernetes.io/projected/37dfd47c-0789-4054-a4c7-37cff4d15b15-kube-api-access-rc7cx\") pod \"heat-operator-controller-manager-7b45cd6d68-g5msx\" (UID: \"37dfd47c-0789-4054-a4c7-37cff4d15b15\") " pod="openstack-operators/heat-operator-controller-manager-7b45cd6d68-g5msx"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.715948 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbft9\" (UniqueName: \"kubernetes.io/projected/0718fce0-14e8-434b-be98-ef48ec6059f3-kube-api-access-hbft9\") pod \"ironic-operator-controller-manager-54fd9dc4b5-tszdx\" (UID: \"0718fce0-14e8-434b-be98-ef48ec6059f3\") " pod="openstack-operators/ironic-operator-controller-manager-54fd9dc4b5-tszdx"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.716007 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e817150a-4845-4d56-8dd0-229394b946db-cert\") pod \"infra-operator-controller-manager-85d55b5858-4gk2l\" (UID: \"e817150a-4845-4d56-8dd0-229394b946db\") " pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.716029 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgfn5\" (UniqueName: \"kubernetes.io/projected/e817150a-4845-4d56-8dd0-229394b946db-kube-api-access-vgfn5\") pod \"infra-operator-controller-manager-85d55b5858-4gk2l\" (UID: \"e817150a-4845-4d56-8dd0-229394b946db\") " pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.716353 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-bb565c8dd-5gtrz"
Dec 16 12:11:27 crc kubenswrapper[4805]: E1216 12:11:27.716451 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 16 12:11:27 crc kubenswrapper[4805]: E1216 12:11:27.716499 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e817150a-4845-4d56-8dd0-229394b946db-cert podName:e817150a-4845-4d56-8dd0-229394b946db nodeName:}" failed. No retries permitted until 2025-12-16 12:11:28.216483812 +0000 UTC m=+961.934741607 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e817150a-4845-4d56-8dd0-229394b946db-cert") pod "infra-operator-controller-manager-85d55b5858-4gk2l" (UID: "e817150a-4845-4d56-8dd0-229394b946db") : secret "infra-operator-webhook-server-cert" not found
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.749316 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7f764db9b-hq9vm"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.787597 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-64d7c556cd-xhhsn"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.796469 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbft9\" (UniqueName: \"kubernetes.io/projected/0718fce0-14e8-434b-be98-ef48ec6059f3-kube-api-access-hbft9\") pod \"ironic-operator-controller-manager-54fd9dc4b5-tszdx\" (UID: \"0718fce0-14e8-434b-be98-ef48ec6059f3\") " pod="openstack-operators/ironic-operator-controller-manager-54fd9dc4b5-tszdx"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.812755 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-54fd9dc4b5-tszdx"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.821371 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrbr9\" (UniqueName: \"kubernetes.io/projected/57bf5f89-e14d-442f-8064-2c0ca66139c4-kube-api-access-lrbr9\") pod \"neutron-operator-controller-manager-58879495c-pqrdf\" (UID: \"57bf5f89-e14d-442f-8064-2c0ca66139c4\") " pod="openstack-operators/neutron-operator-controller-manager-58879495c-pqrdf"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.821450 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knvsw\" (UniqueName: \"kubernetes.io/projected/ca0e9c4d-e3dc-4c3f-8451-8d0258ce9e96-kube-api-access-knvsw\") pod \"keystone-operator-controller-manager-7f764db9b-hq9vm\" (UID: \"ca0e9c4d-e3dc-4c3f-8451-8d0258ce9e96\") " pod="openstack-operators/keystone-operator-controller-manager-7f764db9b-hq9vm"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.821576 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs9rk\" (UniqueName: \"kubernetes.io/projected/0e8bdc0b-046a-4513-9ed6-3350f94faea5-kube-api-access-zs9rk\") pod \"manila-operator-controller-manager-7cc599445b-b76nx\" (UID: \"0e8bdc0b-046a-4513-9ed6-3350f94faea5\") " pod="openstack-operators/manila-operator-controller-manager-7cc599445b-b76nx"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.825568 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgfn5\" (UniqueName: \"kubernetes.io/projected/e817150a-4845-4d56-8dd0-229394b946db-kube-api-access-vgfn5\") pod \"infra-operator-controller-manager-85d55b5858-4gk2l\" (UID: \"e817150a-4845-4d56-8dd0-229394b946db\") " pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.850797 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-xhhsn"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.914115 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nq6tf"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.930467 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs9rk\" (UniqueName: \"kubernetes.io/projected/0e8bdc0b-046a-4513-9ed6-3350f94faea5-kube-api-access-zs9rk\") pod \"manila-operator-controller-manager-7cc599445b-b76nx\" (UID: \"0e8bdc0b-046a-4513-9ed6-3350f94faea5\") " pod="openstack-operators/manila-operator-controller-manager-7cc599445b-b76nx"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.930548 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrbr9\" (UniqueName: \"kubernetes.io/projected/57bf5f89-e14d-442f-8064-2c0ca66139c4-kube-api-access-lrbr9\") pod \"neutron-operator-controller-manager-58879495c-pqrdf\" (UID: \"57bf5f89-e14d-442f-8064-2c0ca66139c4\") " pod="openstack-operators/neutron-operator-controller-manager-58879495c-pqrdf"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.930568 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knvsw\" (UniqueName: \"kubernetes.io/projected/ca0e9c4d-e3dc-4c3f-8451-8d0258ce9e96-kube-api-access-knvsw\") pod \"keystone-operator-controller-manager-7f764db9b-hq9vm\" (UID: \"ca0e9c4d-e3dc-4c3f-8451-8d0258ce9e96\") " pod="openstack-operators/keystone-operator-controller-manager-7f764db9b-hq9vm"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.953715 4805 generic.go:334] "Generic (PLEG): container finished" podID="706f8077-edd1-4edf-95ae-9e9602b72758" containerID="c69dce834f75a1ff830390a7800570676913c0613d0e7b9e19ce5b756783f0c9" exitCode=0
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.954753 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-d5fb87cb8-p6wbs"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.955896 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhwx5" event={"ID":"706f8077-edd1-4edf-95ae-9e9602b72758","Type":"ContainerDied","Data":"c69dce834f75a1ff830390a7800570676913c0613d0e7b9e19ce5b756783f0c9"}
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.955969 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-p6wbs"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.967491 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-mkfn2"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.967903 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6985cf78fb-rfvkc"
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.978807 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-64d7c556cd-xhhsn"]
Dec 16 12:11:27 crc kubenswrapper[4805]: I1216 12:11:27.987446 4805 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-7b45cd6d68-g5msx" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.006933 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrbr9\" (UniqueName: \"kubernetes.io/projected/57bf5f89-e14d-442f-8064-2c0ca66139c4-kube-api-access-lrbr9\") pod \"neutron-operator-controller-manager-58879495c-pqrdf\" (UID: \"57bf5f89-e14d-442f-8064-2c0ca66139c4\") " pod="openstack-operators/neutron-operator-controller-manager-58879495c-pqrdf" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.006997 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-58879495c-pqrdf"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.015651 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs9rk\" (UniqueName: \"kubernetes.io/projected/0e8bdc0b-046a-4513-9ed6-3350f94faea5-kube-api-access-zs9rk\") pod \"manila-operator-controller-manager-7cc599445b-b76nx\" (UID: \"0e8bdc0b-046a-4513-9ed6-3350f94faea5\") " pod="openstack-operators/manila-operator-controller-manager-7cc599445b-b76nx" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.035049 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7hkg\" (UniqueName: \"kubernetes.io/projected/93f2c029-57dc-47fc-9c2e-18f2710ff53e-kube-api-access-d7hkg\") pod \"mariadb-operator-controller-manager-64d7c556cd-xhhsn\" (UID: \"93f2c029-57dc-47fc-9c2e-18f2710ff53e\") " pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-xhhsn" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.054941 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.056050 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-d5fb87cb8-p6wbs"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.056156 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.070616 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knvsw\" (UniqueName: \"kubernetes.io/projected/ca0e9c4d-e3dc-4c3f-8451-8d0258ce9e96-kube-api-access-knvsw\") pod \"keystone-operator-controller-manager-7f764db9b-hq9vm\" (UID: \"ca0e9c4d-e3dc-4c3f-8451-8d0258ce9e96\") " pod="openstack-operators/keystone-operator-controller-manager-7f764db9b-hq9vm" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.078318 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-fn9m7" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.092539 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7cc599445b-b76nx"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.133293 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7f764db9b-hq9vm" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.136055 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7hkg\" (UniqueName: \"kubernetes.io/projected/93f2c029-57dc-47fc-9c2e-18f2710ff53e-kube-api-access-d7hkg\") pod \"mariadb-operator-controller-manager-64d7c556cd-xhhsn\" (UID: \"93f2c029-57dc-47fc-9c2e-18f2710ff53e\") " pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-xhhsn" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.136157 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l777l\" (UniqueName: \"kubernetes.io/projected/df737798-a34c-4142-88c2-592096b02f85-kube-api-access-l777l\") pod \"octavia-operator-controller-manager-d5fb87cb8-p6wbs\" (UID: \"df737798-a34c-4142-88c2-592096b02f85\") " pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-p6wbs" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.158769 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7hkg\" (UniqueName: \"kubernetes.io/projected/93f2c029-57dc-47fc-9c2e-18f2710ff53e-kube-api-access-d7hkg\") pod \"mariadb-operator-controller-manager-64d7c556cd-xhhsn\" (UID: \"93f2c029-57dc-47fc-9c2e-18f2710ff53e\") " pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-xhhsn" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.166835 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7cc599445b-b76nx" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.168026 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.169044 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.176013 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fzddr" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.176210 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.207815 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-l26rt"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.209031 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-l26rt" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.227506 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-58879495c-pqrdf" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.229959 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-jggkv" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.239618 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-xhhsn" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.240000 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l777l\" (UniqueName: \"kubernetes.io/projected/df737798-a34c-4142-88c2-592096b02f85-kube-api-access-l777l\") pod \"octavia-operator-controller-manager-d5fb87cb8-p6wbs\" (UID: \"df737798-a34c-4142-88c2-592096b02f85\") " pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-p6wbs" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.241937 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.253367 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e817150a-4845-4d56-8dd0-229394b946db-cert\") pod \"infra-operator-controller-manager-85d55b5858-4gk2l\" (UID: \"e817150a-4845-4d56-8dd0-229394b946db\") " pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.253672 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xwn9\" (UniqueName: \"kubernetes.io/projected/b885ab69-dc83-439c-9040-09fc3d238093-kube-api-access-9xwn9\") pod \"nova-operator-controller-manager-6b444986fd-5mnhx\" (UID: \"b885ab69-dc83-439c-9040-09fc3d238093\") " pod="openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx" Dec 16 12:11:28 crc kubenswrapper[4805]: E1216 12:11:28.253861 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 16 12:11:28 crc kubenswrapper[4805]: E1216 12:11:28.253903 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e817150a-4845-4d56-8dd0-229394b946db-cert podName:e817150a-4845-4d56-8dd0-229394b946db nodeName:}" failed. No retries permitted until 2025-12-16 12:11:29.253887128 +0000 UTC m=+962.972144933 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e817150a-4845-4d56-8dd0-229394b946db-cert") pod "infra-operator-controller-manager-85d55b5858-4gk2l" (UID: "e817150a-4845-4d56-8dd0-229394b946db") : secret "infra-operator-webhook-server-cert" not found Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.292380 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-l26rt"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.304806 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.305016 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l777l\" (UniqueName: \"kubernetes.io/projected/df737798-a34c-4142-88c2-592096b02f85-kube-api-access-l777l\") pod \"octavia-operator-controller-manager-d5fb87cb8-p6wbs\" (UID: \"df737798-a34c-4142-88c2-592096b02f85\") " pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-p6wbs" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.305841 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.318748 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9kr4z" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.335313 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-cc776f956-smg8x"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.336636 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-cc776f956-smg8x" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.339491 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-pnb8w" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.355015 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmfk8\" (UniqueName: \"kubernetes.io/projected/3b82dc59-a470-4665-8271-3bbcfecb73f1-kube-api-access-jmfk8\") pod \"openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk\" (UID: \"3b82dc59-a470-4665-8271-3bbcfecb73f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.355080 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b82dc59-a470-4665-8271-3bbcfecb73f1-cert\") pod \"openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk\" (UID: \"3b82dc59-a470-4665-8271-3bbcfecb73f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.355108 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xwn9\" (UniqueName: \"kubernetes.io/projected/b885ab69-dc83-439c-9040-09fc3d238093-kube-api-access-9xwn9\") pod \"nova-operator-controller-manager-6b444986fd-5mnhx\" (UID: \"b885ab69-dc83-439c-9040-09fc3d238093\") " pod="openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.355160 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d59b\" (UniqueName: \"kubernetes.io/projected/5581487f-dd20-4fb5-99b7-c6cfb197e548-kube-api-access-2d59b\") pod \"ovn-operator-controller-manager-5b67cfc8fb-l26rt\" (UID: \"5581487f-dd20-4fb5-99b7-c6cfb197e548\") " pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-l26rt" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.364638 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-p6wbs" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.384448 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.407213 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6bc5b9c47-tdd6s"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.408641 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6bc5b9c47-tdd6s" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.415933 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xwn9\" (UniqueName: \"kubernetes.io/projected/b885ab69-dc83-439c-9040-09fc3d238093-kube-api-access-9xwn9\") pod \"nova-operator-controller-manager-6b444986fd-5mnhx\" (UID: \"b885ab69-dc83-439c-9040-09fc3d238093\") " pod="openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.422627 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-px5kf" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.456414 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzxnj\" (UniqueName: \"kubernetes.io/projected/9b3aad50-49b1-43c0-84c9-15368e69abae-kube-api-access-xzxnj\") pod \"placement-operator-controller-manager-cc776f956-smg8x\" (UID: \"9b3aad50-49b1-43c0-84c9-15368e69abae\") " pod="openstack-operators/placement-operator-controller-manager-cc776f956-smg8x" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.456458 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d59b\" (UniqueName: \"kubernetes.io/projected/5581487f-dd20-4fb5-99b7-c6cfb197e548-kube-api-access-2d59b\") pod \"ovn-operator-controller-manager-5b67cfc8fb-l26rt\" (UID: \"5581487f-dd20-4fb5-99b7-c6cfb197e548\") " pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-l26rt" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.456524 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmfk8\" (UniqueName: \"kubernetes.io/projected/3b82dc59-a470-4665-8271-3bbcfecb73f1-kube-api-access-jmfk8\") pod \"openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk\" (UID: \"3b82dc59-a470-4665-8271-3bbcfecb73f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.456563 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b82dc59-a470-4665-8271-3bbcfecb73f1-cert\") pod \"openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk\" (UID: \"3b82dc59-a470-4665-8271-3bbcfecb73f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.456583 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99l42\" (UniqueName: \"kubernetes.io/projected/bbd2ad8a-7239-4e25-bfbd-a009e826a337-kube-api-access-99l42\") pod \"swift-operator-controller-manager-7c9ff8845d-swkll\" (UID: \"bbd2ad8a-7239-4e25-bfbd-a009e826a337\") " pod="openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll" Dec 16 12:11:28 crc kubenswrapper[4805]: E1216 12:11:28.457014 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 16 12:11:28 crc kubenswrapper[4805]: E1216 12:11:28.457056 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b82dc59-a470-4665-8271-3bbcfecb73f1-cert 
podName:3b82dc59-a470-4665-8271-3bbcfecb73f1 nodeName:}" failed. No retries permitted until 2025-12-16 12:11:28.957044042 +0000 UTC m=+962.675301847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3b82dc59-a470-4665-8271-3bbcfecb73f1-cert") pod "openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" (UID: "3b82dc59-a470-4665-8271-3bbcfecb73f1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.466237 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.512904 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmfk8\" (UniqueName: \"kubernetes.io/projected/3b82dc59-a470-4665-8271-3bbcfecb73f1-kube-api-access-jmfk8\") pod \"openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk\" (UID: \"3b82dc59-a470-4665-8271-3bbcfecb73f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.559428 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99l42\" (UniqueName: \"kubernetes.io/projected/bbd2ad8a-7239-4e25-bfbd-a009e826a337-kube-api-access-99l42\") pod \"swift-operator-controller-manager-7c9ff8845d-swkll\" (UID: \"bbd2ad8a-7239-4e25-bfbd-a009e826a337\") " pod="openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.559479 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqhsl\" (UniqueName: \"kubernetes.io/projected/f5b07707-f2f4-4664-9522-268f8ee833db-kube-api-access-wqhsl\") pod \"telemetry-operator-controller-manager-6bc5b9c47-tdd6s\" (UID: \"f5b07707-f2f4-4664-9522-268f8ee833db\") " pod="openstack-operators/telemetry-operator-controller-manager-6bc5b9c47-tdd6s" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.559512 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzxnj\" (UniqueName: \"kubernetes.io/projected/9b3aad50-49b1-43c0-84c9-15368e69abae-kube-api-access-xzxnj\") pod \"placement-operator-controller-manager-cc776f956-smg8x\" (UID: \"9b3aad50-49b1-43c0-84c9-15368e69abae\") " pod="openstack-operators/placement-operator-controller-manager-cc776f956-smg8x" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.589319 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d59b\" (UniqueName: \"kubernetes.io/projected/5581487f-dd20-4fb5-99b7-c6cfb197e548-kube-api-access-2d59b\") pod \"ovn-operator-controller-manager-5b67cfc8fb-l26rt\" (UID: \"5581487f-dd20-4fb5-99b7-c6cfb197e548\") " pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-l26rt" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.620690 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-l26rt" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.621187 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-cc776f956-smg8x"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.621222 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6bc5b9c47-tdd6s"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.621234 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-658bc5c8c5-wlr8s"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.628819 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-658bc5c8c5-wlr8s" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.632508 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-5v9tz" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.636458 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99l42\" (UniqueName: \"kubernetes.io/projected/bbd2ad8a-7239-4e25-bfbd-a009e826a337-kube-api-access-99l42\") pod \"swift-operator-controller-manager-7c9ff8845d-swkll\" (UID: \"bbd2ad8a-7239-4e25-bfbd-a009e826a337\") " pod="openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.640789 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzxnj\" (UniqueName: \"kubernetes.io/projected/9b3aad50-49b1-43c0-84c9-15368e69abae-kube-api-access-xzxnj\") pod \"placement-operator-controller-manager-cc776f956-smg8x\" (UID: \"9b3aad50-49b1-43c0-84c9-15368e69abae\") " pod="openstack-operators/placement-operator-controller-manager-cc776f956-smg8x" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.644182 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5d79c6465c-nldwq"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.656272 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5d79c6465c-nldwq" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.681055 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-mfsvv" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.682532 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqhsl\" (UniqueName: \"kubernetes.io/projected/f5b07707-f2f4-4664-9522-268f8ee833db-kube-api-access-wqhsl\") pod \"telemetry-operator-controller-manager-6bc5b9c47-tdd6s\" (UID: \"f5b07707-f2f4-4664-9522-268f8ee833db\") " pod="openstack-operators/telemetry-operator-controller-manager-6bc5b9c47-tdd6s" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.684476 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.713767 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.740630 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqhsl\" (UniqueName: \"kubernetes.io/projected/f5b07707-f2f4-4664-9522-268f8ee833db-kube-api-access-wqhsl\") pod \"telemetry-operator-controller-manager-6bc5b9c47-tdd6s\" (UID: \"f5b07707-f2f4-4664-9522-268f8ee833db\") " pod="openstack-operators/telemetry-operator-controller-manager-6bc5b9c47-tdd6s" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.752152 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-658bc5c8c5-wlr8s"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.785214 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5d79c6465c-nldwq"] Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.788116 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98p6q\" (UniqueName: \"kubernetes.io/projected/39e02343-d3e2-4e57-b38e-1b275f3cb29d-kube-api-access-98p6q\") pod \"watcher-operator-controller-manager-658bc5c8c5-wlr8s\" (UID: \"39e02343-d3e2-4e57-b38e-1b275f3cb29d\") " pod="openstack-operators/watcher-operator-controller-manager-658bc5c8c5-wlr8s" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.788405 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbl77\" (UniqueName: \"kubernetes.io/projected/68475cd9-8ddd-44c5-ae7e-446bc92bb188-kube-api-access-lbl77\") pod \"test-operator-controller-manager-5d79c6465c-nldwq\" (UID: \"68475cd9-8ddd-44c5-ae7e-446bc92bb188\") " pod="openstack-operators/test-operator-controller-manager-5d79c6465c-nldwq" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.810434 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-cc776f956-smg8x" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.890260 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98p6q\" (UniqueName: \"kubernetes.io/projected/39e02343-d3e2-4e57-b38e-1b275f3cb29d-kube-api-access-98p6q\") pod \"watcher-operator-controller-manager-658bc5c8c5-wlr8s\" (UID: \"39e02343-d3e2-4e57-b38e-1b275f3cb29d\") " pod="openstack-operators/watcher-operator-controller-manager-658bc5c8c5-wlr8s" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.890343 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbl77\" (UniqueName: \"kubernetes.io/projected/68475cd9-8ddd-44c5-ae7e-446bc92bb188-kube-api-access-lbl77\") pod \"test-operator-controller-manager-5d79c6465c-nldwq\" (UID: \"68475cd9-8ddd-44c5-ae7e-446bc92bb188\") " pod="openstack-operators/test-operator-controller-manager-5d79c6465c-nldwq" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.949608 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbl77\" (UniqueName: \"kubernetes.io/projected/68475cd9-8ddd-44c5-ae7e-446bc92bb188-kube-api-access-lbl77\") pod \"test-operator-controller-manager-5d79c6465c-nldwq\" (UID: \"68475cd9-8ddd-44c5-ae7e-446bc92bb188\") " pod="openstack-operators/test-operator-controller-manager-5d79c6465c-nldwq" Dec 16 12:11:28 crc kubenswrapper[4805]: I1216 12:11:28.960917 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98p6q\" (UniqueName: \"kubernetes.io/projected/39e02343-d3e2-4e57-b38e-1b275f3cb29d-kube-api-access-98p6q\") pod \"watcher-operator-controller-manager-658bc5c8c5-wlr8s\" (UID: \"39e02343-d3e2-4e57-b38e-1b275f3cb29d\") " pod="openstack-operators/watcher-operator-controller-manager-658bc5c8c5-wlr8s" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.000018 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b82dc59-a470-4665-8271-3bbcfecb73f1-cert\") pod \"openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk\" (UID: \"3b82dc59-a470-4665-8271-3bbcfecb73f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" Dec 16 12:11:29 crc kubenswrapper[4805]: E1216 12:11:29.000265 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 16 12:11:29 crc kubenswrapper[4805]: E1216 12:11:29.000317 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b82dc59-a470-4665-8271-3bbcfecb73f1-cert podName:3b82dc59-a470-4665-8271-3bbcfecb73f1 nodeName:}" failed. No retries permitted until 2025-12-16 12:11:30.000300836 +0000 UTC m=+963.718558651 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3b82dc59-a470-4665-8271-3bbcfecb73f1-cert") pod "openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" (UID: "3b82dc59-a470-4665-8271-3bbcfecb73f1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.013765 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6bc5b9c47-tdd6s" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.042834 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb"] Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.043229 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-658bc5c8c5-wlr8s" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.044064 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.047762 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.047981 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-z4ns8" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.069316 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5d79c6465c-nldwq" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.076026 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb"] Dec 16 12:11:29 crc kubenswrapper[4805]: W1216 12:11:29.117817 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb738fc79_2b52_4759_a3ee_72e0946df392.slice/crio-4e6062540785efb2ec0e90b22d0026430fdc40a544d863a7e110761e4f1655f0 WatchSource:0}: Error finding container 4e6062540785efb2ec0e90b22d0026430fdc40a544d863a7e110761e4f1655f0: Status 404 returned error can't find the container with id 4e6062540785efb2ec0e90b22d0026430fdc40a544d863a7e110761e4f1655f0 Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.171443 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5847f67c56-pp6mh"] Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.197677 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn"] Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.198697 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.209657 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6thx8" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.211551 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl794\" (UniqueName: \"kubernetes.io/projected/cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b-kube-api-access-vl794\") pod \"openstack-operator-controller-manager-54798f4d5-64lpb\" (UID: \"cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b\") " pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.211787 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b-cert\") pod \"openstack-operator-controller-manager-54798f4d5-64lpb\" (UID: \"cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b\") " pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.238487 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn"] Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.288649 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-669b58f65-782cb"] Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.306651 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-69977bdf55-t27s6"] Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.313433 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b-cert\") pod \"openstack-operator-controller-manager-54798f4d5-64lpb\" (UID: \"cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b\") " pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.313480 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e817150a-4845-4d56-8dd0-229394b946db-cert\") pod \"infra-operator-controller-manager-85d55b5858-4gk2l\" (UID: \"e817150a-4845-4d56-8dd0-229394b946db\") " pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.313530 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbg9c\" (UniqueName: \"kubernetes.io/projected/418e4014-2b81-4b93-a665-ca28d1e1d7ee-kube-api-access-vbg9c\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn\" (UID: \"418e4014-2b81-4b93-a665-ca28d1e1d7ee\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.313550 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl794\" (UniqueName: \"kubernetes.io/projected/cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b-kube-api-access-vl794\") pod \"openstack-operator-controller-manager-54798f4d5-64lpb\" (UID: \"cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b\") " 
pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" Dec 16 12:11:29 crc kubenswrapper[4805]: E1216 12:11:29.313859 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 16 12:11:29 crc kubenswrapper[4805]: E1216 12:11:29.313896 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b-cert podName:cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b nodeName:}" failed. No retries permitted until 2025-12-16 12:11:29.813883375 +0000 UTC m=+963.532141180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b-cert") pod "openstack-operator-controller-manager-54798f4d5-64lpb" (UID: "cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b") : secret "webhook-server-cert" not found Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.324231 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54fd9dc4b5-tszdx"] Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.324806 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e817150a-4845-4d56-8dd0-229394b946db-cert\") pod \"infra-operator-controller-manager-85d55b5858-4gk2l\" (UID: \"e817150a-4845-4d56-8dd0-229394b946db\") " pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.336994 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-bb565c8dd-5gtrz"] Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.375783 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl794\" (UniqueName: \"kubernetes.io/projected/cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b-kube-api-access-vl794\") pod \"openstack-operator-controller-manager-54798f4d5-64lpb\" (UID: \"cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b\") " pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.415040 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbg9c\" (UniqueName: \"kubernetes.io/projected/418e4014-2b81-4b93-a665-ca28d1e1d7ee-kube-api-access-vbg9c\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn\" (UID: \"418e4014-2b81-4b93-a665-ca28d1e1d7ee\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.430506 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.438507 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbg9c\" (UniqueName: \"kubernetes.io/projected/418e4014-2b81-4b93-a665-ca28d1e1d7ee-kube-api-access-vbg9c\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn\" (UID: \"418e4014-2b81-4b93-a665-ca28d1e1d7ee\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn" Dec 16 12:11:29 crc kubenswrapper[4805]: W1216 12:11:29.532713 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0718fce0_14e8_434b_be98_ef48ec6059f3.slice/crio-763bfe9906b180f25d44578c88a5206f2666d8c4038964d8a7416f439e32b927 WatchSource:0}: Error finding container 763bfe9906b180f25d44578c88a5206f2666d8c4038964d8a7416f439e32b927: Status 404 returned error can't find the container with id 763bfe9906b180f25d44578c88a5206f2666d8c4038964d8a7416f439e32b927 Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.662299 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-7b45cd6d68-g5msx"] Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.671584 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn" Dec 16 12:11:29 crc kubenswrapper[4805]: I1216 12:11:29.832712 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b-cert\") pod \"openstack-operator-controller-manager-54798f4d5-64lpb\" (UID: \"cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b\") " pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" Dec 16 12:11:29 crc kubenswrapper[4805]: E1216 12:11:29.833084 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 16 12:11:29 crc kubenswrapper[4805]: E1216 12:11:29.833428 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b-cert podName:cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b nodeName:}" failed. No retries permitted until 2025-12-16 12:11:30.833412189 +0000 UTC m=+964.551669994 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b-cert") pod "openstack-operator-controller-manager-54798f4d5-64lpb" (UID: "cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b") : secret "webhook-server-cert" not found Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.028888 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-7b45cd6d68-g5msx" event={"ID":"37dfd47c-0789-4054-a4c7-37cff4d15b15","Type":"ContainerStarted","Data":"a2d1d70d0293ce88aebb3f141a77173ecad3a7970c833a41686619fd4ce474dd"} Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.035349 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5847f67c56-pp6mh" event={"ID":"b738fc79-2b52-4759-a3ee-72e0946df392","Type":"ContainerStarted","Data":"4e6062540785efb2ec0e90b22d0026430fdc40a544d863a7e110761e4f1655f0"} Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.041000 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b82dc59-a470-4665-8271-3bbcfecb73f1-cert\") pod \"openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk\" (UID: \"3b82dc59-a470-4665-8271-3bbcfecb73f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.053926 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-bb565c8dd-5gtrz" event={"ID":"81b6ebe6-984f-4ecc-9d75-ac78097f7af2","Type":"ContainerStarted","Data":"597eb9fe3cd4c1b7318162ebd5708ad4e355b4ec1aefd3e29f82b2aa51496588"} Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.055226 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b82dc59-a470-4665-8271-3bbcfecb73f1-cert\") pod \"openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk\" (UID: \"3b82dc59-a470-4665-8271-3bbcfecb73f1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.057351 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-669b58f65-782cb" event={"ID":"f31accc0-70b7-4014-ac71-679dc729ed80","Type":"ContainerStarted","Data":"22fe015f8681cab9f2fbda6f95201a957b87fd7ff8af4ba50f06f62221e418b5"} Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.060907 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-69977bdf55-t27s6" event={"ID":"4a51f724-a3be-4ef5-acd6-84891873147b","Type":"ContainerStarted","Data":"c117142446b30444a34ad31a89253003555beebcdad6509f58bdc23a35f3ad92"} Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.061564 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54fd9dc4b5-tszdx" event={"ID":"0718fce0-14e8-434b-be98-ef48ec6059f3","Type":"ContainerStarted","Data":"763bfe9906b180f25d44578c88a5206f2666d8c4038964d8a7416f439e32b927"} Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.134884 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7cc599445b-b76nx"] Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.196508 4805 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/horizon-operator-controller-manager-6985cf78fb-rfvkc"] Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.227603 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-64d7c556cd-xhhsn"] Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.308461 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.494602 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-d5fb87cb8-p6wbs"] Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.896614 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b-cert\") pod \"openstack-operator-controller-manager-54798f4d5-64lpb\" (UID: \"cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b\") " pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.913579 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b-cert\") pod \"openstack-operator-controller-manager-54798f4d5-64lpb\" (UID: \"cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b\") " pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.919568 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" Dec 16 12:11:30 crc kubenswrapper[4805]: I1216 12:11:30.987863 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7f764db9b-hq9vm"] Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.025376 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-l26rt"] Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.034669 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-cc776f956-smg8x"] Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.045218 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx"] Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.057954 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-58879495c-pqrdf"] Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.075523 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll"] Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.145390 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-cc776f956-smg8x" event={"ID":"9b3aad50-49b1-43c0-84c9-15368e69abae","Type":"ContainerStarted","Data":"b0967b1ccf348091d6b769f90b4a5c378809b271af20658ab06b98dcb7d60713"} Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.148482 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-p6wbs" 
event={"ID":"df737798-a34c-4142-88c2-592096b02f85","Type":"ContainerStarted","Data":"233bd22e2d2a3b7a86e0affb5d44197c2f86a6d321a73a1eafb898a7067c8e22"} Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.149894 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7cc599445b-b76nx" event={"ID":"0e8bdc0b-046a-4513-9ed6-3350f94faea5","Type":"ContainerStarted","Data":"7ffec473493327334f49e5c62d12f2f48381255b63d8b175984ef724e0bf390e"} Dec 16 12:11:31 crc kubenswrapper[4805]: W1216 12:11:31.170271 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5581487f_dd20_4fb5_99b7_c6cfb197e548.slice/crio-748cc4b9deb3e47cd7519e41a8be4a485deb5f70c7ca775d976ad5f13ed1667d WatchSource:0}: Error finding container 748cc4b9deb3e47cd7519e41a8be4a485deb5f70c7ca775d976ad5f13ed1667d: Status 404 returned error can't find the container with id 748cc4b9deb3e47cd7519e41a8be4a485deb5f70c7ca775d976ad5f13ed1667d Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.172298 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-xhhsn" event={"ID":"93f2c029-57dc-47fc-9c2e-18f2710ff53e","Type":"ContainerStarted","Data":"77e70d6373cd12a336abdd03fd2ae4f813e33a9cd424a331df3acaef0613e6f0"} Dec 16 12:11:31 crc kubenswrapper[4805]: W1216 12:11:31.185030 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57bf5f89_e14d_442f_8064_2c0ca66139c4.slice/crio-bb296fdf4b7706aed1bb8e5742b482c096954b924ffd0055524e0ba6a4781ab7 WatchSource:0}: Error finding container bb296fdf4b7706aed1bb8e5742b482c096954b924ffd0055524e0ba6a4781ab7: Status 404 returned error can't find the container with id bb296fdf4b7706aed1bb8e5742b482c096954b924ffd0055524e0ba6a4781ab7 Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.193588 4805 generic.go:334] "Generic (PLEG): container finished" podID="706f8077-edd1-4edf-95ae-9e9602b72758" containerID="b3138c5f62641a8f579038b222c805969cdf1bd7f6aa545145dd185f85d08f47" exitCode=0 Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.193700 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhwx5" event={"ID":"706f8077-edd1-4edf-95ae-9e9602b72758","Type":"ContainerDied","Data":"b3138c5f62641a8f579038b222c805969cdf1bd7f6aa545145dd185f85d08f47"} Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.201863 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7f764db9b-hq9vm" event={"ID":"ca0e9c4d-e3dc-4c3f-8451-8d0258ce9e96","Type":"ContainerStarted","Data":"a65224db359430e0aada8eeb9cf92327ceb9bf3f51928174c74d6b9ced005080"} Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.208732 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6985cf78fb-rfvkc" event={"ID":"81719820-96fa-418d-9d0b-18ba90027850","Type":"ContainerStarted","Data":"7597d653eacd7ed21d79656092969b9ad3c111e47e7d77c94a6319bab50afb3a"} Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.266414 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-658bc5c8c5-wlr8s"] Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.289083 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-6bc5b9c47-tdd6s"] Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.297622 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l"] Dec 16 12:11:31 crc kubenswrapper[4805]: W1216 12:11:31.347644 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39e02343_d3e2_4e57_b38e_1b275f3cb29d.slice/crio-a9e0b621b205a9beb04e793eedb2a515f53fd2e9267d4c45f3fad1e8bc0b35f8 WatchSource:0}: Error finding container a9e0b621b205a9beb04e793eedb2a515f53fd2e9267d4c45f3fad1e8bc0b35f8: Status 404 returned error can't find the container with id a9e0b621b205a9beb04e793eedb2a515f53fd2e9267d4c45f3fad1e8bc0b35f8 Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.443329 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5d79c6465c-nldwq"] Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.478012 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn"] Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.528837 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk"] Dec 16 12:11:31 crc kubenswrapper[4805]: I1216 12:11:31.641713 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb"] Dec 16 12:11:31 crc kubenswrapper[4805]: E1216 12:11:31.651453 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bfb1e0635f87094bee949f00fea37cbc27b88c42a7cef1909e0b68e5abd185c7,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:18.0-fr4-latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:18.0-fr4-latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_
URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IM
AGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelop
e-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:18.0-fr4-latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE
_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jmfk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk_openstack-operators(3b82dc59-a470-4665-8271-3bbcfecb73f1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 16 12:11:32 crc kubenswrapper[4805]: I1216 12:11:32.338301 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-58879495c-pqrdf" event={"ID":"57bf5f89-e14d-442f-8064-2c0ca66139c4","Type":"ContainerStarted","Data":"bb296fdf4b7706aed1bb8e5742b482c096954b924ffd0055524e0ba6a4781ab7"} Dec 16 12:11:32 crc kubenswrapper[4805]: I1216 12:11:32.341332 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll" event={"ID":"bbd2ad8a-7239-4e25-bfbd-a009e826a337","Type":"ContainerStarted","Data":"30233421c40e6d0c5adcaa07500cab6faef61da042575714621437d7c0cf7313"} Dec 16 12:11:32 crc kubenswrapper[4805]: I1216 12:11:32.365046 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" event={"ID":"3b82dc59-a470-4665-8271-3bbcfecb73f1","Type":"ContainerStarted","Data":"d5f92cc609092e3efc2d61753b5895104d68dc83f5ff7a9494f386b6193effeb"} Dec 16 12:11:32 crc kubenswrapper[4805]: I1216 12:11:32.393580 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" event={"ID":"cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b","Type":"ContainerStarted","Data":"0f31483d6fa3da83ce77009502b187be94bd4425a2f798a05dc07d4a353b592e"} Dec 16 12:11:32 
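
The "ErrImagePull: pull QPS exceeded" failure above is the kubelet's own client-side image-pull rate limiter rejecting the pull, not a registry-side error: pulls beyond registryPullQPS (default 5/s, with a registryBurst of 10) fail immediately and are retried under the normal image back-off, which is why the same pod flips to ImagePullBackOff shortly afterwards. On an OpenShift/CRC node the limiter is tunable through a KubeletConfig CR; a minimal sketch, assuming a worker pool label and example values (none of which appear in this log):

    apiVersion: machineconfiguration.openshift.io/v1
    kind: KubeletConfig
    metadata:
      name: raise-image-pull-qps          # hypothetical name
    spec:
      machineConfigPoolSelector:
        matchLabels:
          pools.operator.machineconfiguration.openshift.io/worker: ""   # assumed pool
      kubeletConfig:
        registryPullQPS: 10               # 0 disables client-side pull throttling
        registryBurst: 20
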
crc kubenswrapper[4805]: I1216 12:11:32.393635 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" event={"ID":"cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b","Type":"ContainerStarted","Data":"53d5f5aedf29e3e7e0bcba33b8de07a758e9ab8ad68b2f12cb4d083fc678f7fd"} Dec 16 12:11:32 crc kubenswrapper[4805]: I1216 12:11:32.409283 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn" event={"ID":"418e4014-2b81-4b93-a665-ca28d1e1d7ee","Type":"ContainerStarted","Data":"47c10ee58ac770b2b8d854cc6e66269614837c37a24eedf6b0b0608b40689afc"} Dec 16 12:11:32 crc kubenswrapper[4805]: I1216 12:11:32.411879 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-658bc5c8c5-wlr8s" event={"ID":"39e02343-d3e2-4e57-b38e-1b275f3cb29d","Type":"ContainerStarted","Data":"a9e0b621b205a9beb04e793eedb2a515f53fd2e9267d4c45f3fad1e8bc0b35f8"} Dec 16 12:11:32 crc kubenswrapper[4805]: I1216 12:11:32.429167 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-l26rt" event={"ID":"5581487f-dd20-4fb5-99b7-c6cfb197e548","Type":"ContainerStarted","Data":"748cc4b9deb3e47cd7519e41a8be4a485deb5f70c7ca775d976ad5f13ed1667d"} Dec 16 12:11:32 crc kubenswrapper[4805]: I1216 12:11:32.453211 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5d79c6465c-nldwq" event={"ID":"68475cd9-8ddd-44c5-ae7e-446bc92bb188","Type":"ContainerStarted","Data":"30c300bfba5a48e10f4ce3aef4cff444691ac11acf422ffbf688f0e3a0f60c2a"} Dec 16 12:11:32 crc kubenswrapper[4805]: I1216 12:11:32.473193 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l" event={"ID":"e817150a-4845-4d56-8dd0-229394b946db","Type":"ContainerStarted","Data":"1f2528796b498c0c78056cd0471fdbbf9337313ebe1b02b5aeeb4e9bd37af693"} Dec 16 12:11:32 crc kubenswrapper[4805]: I1216 12:11:32.483580 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6bc5b9c47-tdd6s" event={"ID":"f5b07707-f2f4-4664-9522-268f8ee833db","Type":"ContainerStarted","Data":"1dd3304858f03a088a292328d0f3182b3632ad0ea18ec8860acd5eb7cb0f0bfb"} Dec 16 12:11:32 crc kubenswrapper[4805]: I1216 12:11:32.489046 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx" event={"ID":"b885ab69-dc83-439c-9040-09fc3d238093","Type":"ContainerStarted","Data":"d8203d2d53b429ea72629213f71937a54c8a7e47c384cf7dfe97020e8921dcb5"} Dec 16 12:11:32 crc kubenswrapper[4805]: E1216 12:11:32.662797 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" podUID="3b82dc59-a470-4665-8271-3bbcfecb73f1" Dec 16 12:11:33 crc kubenswrapper[4805]: I1216 12:11:33.513996 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" event={"ID":"cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b","Type":"ContainerStarted","Data":"7faa6c47a66223046057062c0476704ff64873e3e06ac0df476598725126ea69"} Dec 16 12:11:33 crc kubenswrapper[4805]: I1216 12:11:33.519436 4805 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" Dec 16 12:11:33 crc kubenswrapper[4805]: I1216 12:11:33.540795 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" event={"ID":"3b82dc59-a470-4665-8271-3bbcfecb73f1","Type":"ContainerStarted","Data":"1da6bcca85f4004edfaa625e0e6945ad231219d709739b3d716a12c18cf9fe2b"} Dec 16 12:11:33 crc kubenswrapper[4805]: E1216 12:11:33.544962 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bfb1e0635f87094bee949f00fea37cbc27b88c42a7cef1909e0b68e5abd185c7\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" podUID="3b82dc59-a470-4665-8271-3bbcfecb73f1" Dec 16 12:11:33 crc kubenswrapper[4805]: I1216 12:11:33.556569 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" podStartSLOduration=5.556541452 podStartE2EDuration="5.556541452s" podCreationTimestamp="2025-12-16 12:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:11:33.5477315 +0000 UTC m=+967.265989325" watchObservedRunningTime="2025-12-16 12:11:33.556541452 +0000 UTC m=+967.274799267" Dec 16 12:11:34 crc kubenswrapper[4805]: I1216 12:11:34.575897 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhwx5" event={"ID":"706f8077-edd1-4edf-95ae-9e9602b72758","Type":"ContainerStarted","Data":"6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0"} Dec 16 12:11:34 crc kubenswrapper[4805]: E1216 12:11:34.579365 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bfb1e0635f87094bee949f00fea37cbc27b88c42a7cef1909e0b68e5abd185c7\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" podUID="3b82dc59-a470-4665-8271-3bbcfecb73f1" Dec 16 12:11:34 crc kubenswrapper[4805]: I1216 12:11:34.636620 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qhwx5" podStartSLOduration=4.223233286 podStartE2EDuration="9.636595505s" podCreationTimestamp="2025-12-16 12:11:25 +0000 UTC" firstStartedPulling="2025-12-16 12:11:27.960928889 +0000 UTC m=+961.679186694" lastFinishedPulling="2025-12-16 12:11:33.374291108 +0000 UTC m=+967.092548913" observedRunningTime="2025-12-16 12:11:34.620641958 +0000 UTC m=+968.338899783" watchObservedRunningTime="2025-12-16 12:11:34.636595505 +0000 UTC m=+968.354853320" Dec 16 12:11:35 crc kubenswrapper[4805]: I1216 12:11:35.956396 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qhwx5" Dec 16 12:11:35 crc kubenswrapper[4805]: I1216 12:11:35.957209 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qhwx5" Dec 16 12:11:37 crc kubenswrapper[4805]: I1216 12:11:37.058222 4805 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-marketplace-qhwx5" podUID="706f8077-edd1-4edf-95ae-9e9602b72758" containerName="registry-server" probeResult="failure" output=< Dec 16 12:11:37 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 16 12:11:37 crc kubenswrapper[4805]: > Dec 16 12:11:40 crc kubenswrapper[4805]: I1216 12:11:40.926769 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-54798f4d5-64lpb" Dec 16 12:11:44 crc kubenswrapper[4805]: I1216 12:11:44.281514 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t5zrz"] Dec 16 12:11:44 crc kubenswrapper[4805]: I1216 12:11:44.284079 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:11:44 crc kubenswrapper[4805]: I1216 12:11:44.308017 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t5zrz"] Dec 16 12:11:44 crc kubenswrapper[4805]: I1216 12:11:44.405742 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da19c238-d2cb-4862-a08d-9b73f2b8a2af-catalog-content\") pod \"certified-operators-t5zrz\" (UID: \"da19c238-d2cb-4862-a08d-9b73f2b8a2af\") " pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:11:44 crc kubenswrapper[4805]: I1216 12:11:44.405799 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwjbb\" (UniqueName: \"kubernetes.io/projected/da19c238-d2cb-4862-a08d-9b73f2b8a2af-kube-api-access-jwjbb\") pod \"certified-operators-t5zrz\" (UID: \"da19c238-d2cb-4862-a08d-9b73f2b8a2af\") " pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:11:44 crc kubenswrapper[4805]: I1216 12:11:44.405835 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da19c238-d2cb-4862-a08d-9b73f2b8a2af-utilities\") pod \"certified-operators-t5zrz\" (UID: \"da19c238-d2cb-4862-a08d-9b73f2b8a2af\") " pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:11:44 crc kubenswrapper[4805]: I1216 12:11:44.507234 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da19c238-d2cb-4862-a08d-9b73f2b8a2af-catalog-content\") pod \"certified-operators-t5zrz\" (UID: \"da19c238-d2cb-4862-a08d-9b73f2b8a2af\") " pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:11:44 crc kubenswrapper[4805]: I1216 12:11:44.507292 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwjbb\" (UniqueName: \"kubernetes.io/projected/da19c238-d2cb-4862-a08d-9b73f2b8a2af-kube-api-access-jwjbb\") pod \"certified-operators-t5zrz\" (UID: \"da19c238-d2cb-4862-a08d-9b73f2b8a2af\") " pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:11:44 crc kubenswrapper[4805]: I1216 12:11:44.507322 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da19c238-d2cb-4862-a08d-9b73f2b8a2af-utilities\") pod \"certified-operators-t5zrz\" (UID: \"da19c238-d2cb-4862-a08d-9b73f2b8a2af\") " pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:11:44 crc kubenswrapper[4805]: I1216 12:11:44.507891 
4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da19c238-d2cb-4862-a08d-9b73f2b8a2af-utilities\") pod \"certified-operators-t5zrz\" (UID: \"da19c238-d2cb-4862-a08d-9b73f2b8a2af\") " pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:11:44 crc kubenswrapper[4805]: I1216 12:11:44.508046 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da19c238-d2cb-4862-a08d-9b73f2b8a2af-catalog-content\") pod \"certified-operators-t5zrz\" (UID: \"da19c238-d2cb-4862-a08d-9b73f2b8a2af\") " pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:11:44 crc kubenswrapper[4805]: I1216 12:11:44.536089 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwjbb\" (UniqueName: \"kubernetes.io/projected/da19c238-d2cb-4862-a08d-9b73f2b8a2af-kube-api-access-jwjbb\") pod \"certified-operators-t5zrz\" (UID: \"da19c238-d2cb-4862-a08d-9b73f2b8a2af\") " pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:11:44 crc kubenswrapper[4805]: I1216 12:11:44.601809 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:11:46 crc kubenswrapper[4805]: I1216 12:11:45.999935 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qhwx5" Dec 16 12:11:46 crc kubenswrapper[4805]: I1216 12:11:46.057133 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qhwx5" Dec 16 12:11:47 crc kubenswrapper[4805]: I1216 12:11:47.658058 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhwx5"] Dec 16 12:11:47 crc kubenswrapper[4805]: I1216 12:11:47.813903 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qhwx5" podUID="706f8077-edd1-4edf-95ae-9e9602b72758" containerName="registry-server" containerID="cri-o://6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0" gracePeriod=2 Dec 16 12:11:48 crc kubenswrapper[4805]: I1216 12:11:48.826787 4805 generic.go:334] "Generic (PLEG): container finished" podID="706f8077-edd1-4edf-95ae-9e9602b72758" containerID="6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0" exitCode=0 Dec 16 12:11:48 crc kubenswrapper[4805]: I1216 12:11:48.826880 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhwx5" event={"ID":"706f8077-edd1-4edf-95ae-9e9602b72758","Type":"ContainerDied","Data":"6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0"} Dec 16 12:11:52 crc kubenswrapper[4805]: E1216 12:11:52.532590 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:13002ade744a50b84f3e9e793e68a3998be0d90fe877520fbd60257309931d7d" Dec 16 12:11:52 crc kubenswrapper[4805]: E1216 12:11:52.533019 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:13002ade744a50b84f3e9e793e68a3998be0d90fe877520fbd60257309931d7d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqhsl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6bc5b9c47-tdd6s_openstack-operators(f5b07707-f2f4-4664-9522-268f8ee833db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:11:53 crc kubenswrapper[4805]: E1216 12:11:53.085010 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:03833d5c6982b42c836787ee1863f5f73e20dce26a154171de6c0cf4712938b5" Dec 16 12:11:53 crc kubenswrapper[4805]: E1216 12:11:53.085524 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:03833d5c6982b42c836787ee1863f5f73e20dce26a154171de6c0cf4712938b5,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nx9l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-bb565c8dd-5gtrz_openstack-operators(81b6ebe6-984f-4ecc-9d75-ac78097f7af2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:11:53 crc kubenswrapper[4805]: E1216 12:11:53.896283 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e2b7b9bdbf93b2ff7012cd2af921ae43082fd3eb036d884f13292c2e56f505c" Dec 16 12:11:53 crc kubenswrapper[4805]: E1216 12:11:53.896439 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e2b7b9bdbf93b2ff7012cd2af921ae43082fd3eb036d884f13292c2e56f505c,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hbft9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-54fd9dc4b5-tszdx_openstack-operators(0718fce0-14e8-434b-be98-ef48ec6059f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:11:54 crc kubenswrapper[4805]: E1216 12:11:54.415620 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 16 12:11:54 crc kubenswrapper[4805]: E1216 12:11:54.415812 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vbg9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn_openstack-operators(418e4014-2b81-4b93-a665-ca28d1e1d7ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:11:54 crc 
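
Each "Unhandled Error" entry in this stretch prints the failed container as a raw Go struct (&Container{...}), which is hard to read but fully recoverable. The recurring openstack-operator shape is a /manager binary with leader election, lease timings injected via env, health endpoints on :8081, and resource quantities in resource.Quantity internal notation: {{500 -3} {} 500m DecimalSI} is 500 x 10^-3 cores (500m), and {{536870912 0} {} BinarySI} is 536870912 bytes (512Mi). Rewritten as the pod-spec YAML it was created from, using the telemetry-operator dump above (a reconstruction; omitted defaults such as the injected service-account volumeMount are left out):

    - name: manager
      image: quay.io/openstack-k8s-operators/telemetry-operator@sha256:13002ade744a50b84f3e9e793e68a3998be0d90fe877520fbd60257309931d7d
      command: ["/manager"]
      args:
      - --health-probe-bind-address=:8081
      - --metrics-bind-address=127.0.0.1:8080
      - --leader-elect
      env:
      - {name: LEASE_DURATION, value: "30"}
      - {name: RENEW_DEADLINE, value: "20"}
      - {name: RETRY_PERIOD, value: "5"}
      - {name: ENABLE_WEBHOOKS, value: "false"}
      resources:
        limits:   {cpu: 500m, memory: 512Mi}   # {{500 -3}} and 536870912 bytes
        requests: {cpu: 10m,  memory: 256Mi}   # {{10 -3}} and 268435456 bytes
      livenessProbe:
        httpGet: {path: /healthz, port: 8081}
        initialDelaySeconds: 15
        periodSeconds: 20
      readinessProbe:
        httpGet: {path: /readyz, port: 8081}
        initialDelaySeconds: 5
        periodSeconds: 10
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["MKNOD"]
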
kubenswrapper[4805]: E1216 12:11:54.417081 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn" podUID="418e4014-2b81-4b93-a665-ca28d1e1d7ee" Dec 16 12:11:54 crc kubenswrapper[4805]: E1216 12:11:54.878115 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn" podUID="418e4014-2b81-4b93-a665-ca28d1e1d7ee" Dec 16 12:11:54 crc kubenswrapper[4805]: E1216 12:11:54.976413 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:02bd753e9f2a1642153fdb8a3e98087f42540eed434d419677c9f891bbee3961" Dec 16 12:11:54 crc kubenswrapper[4805]: E1216 12:11:54.976626 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:02bd753e9f2a1642153fdb8a3e98087f42540eed434d419677c9f891bbee3961,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rc7cx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
heat-operator-controller-manager-7b45cd6d68-g5msx_openstack-operators(37dfd47c-0789-4054-a4c7-37cff4d15b15): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:11:55 crc kubenswrapper[4805]: E1216 12:11:55.467658 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:882c32b0217c8fc8014f4c8aa8aed81f002e38f37bfd0d379398df218517982e" Dec 16 12:11:55 crc kubenswrapper[4805]: E1216 12:11:55.468106 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:882c32b0217c8fc8014f4c8aa8aed81f002e38f37bfd0d379398df218517982e,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l8pqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-5847f67c56-pp6mh_openstack-operators(b738fc79-2b52-4759-a3ee-72e0946df392): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:11:55 crc kubenswrapper[4805]: E1216 12:11:55.958544 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0 is running failed: container process not found" containerID="6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:11:55 crc 
kubenswrapper[4805]: E1216 12:11:55.959049 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0 is running failed: container process not found" containerID="6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:11:55 crc kubenswrapper[4805]: E1216 12:11:55.959481 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0 is running failed: container process not found" containerID="6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:11:55 crc kubenswrapper[4805]: E1216 12:11:55.959510 4805 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-qhwx5" podUID="706f8077-edd1-4edf-95ae-9e9602b72758" containerName="registry-server" Dec 16 12:11:56 crc kubenswrapper[4805]: E1216 12:11:56.076928 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:de8884167c9a7ccc8fbe469fae984de54625a1e2fe782e36356e1ff9cd1774dd" Dec 16 12:11:56 crc kubenswrapper[4805]: E1216 12:11:56.077129 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:de8884167c9a7ccc8fbe469fae984de54625a1e2fe782e36356e1ff9cd1774dd,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zs9rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7cc599445b-b76nx_openstack-operators(0e8bdc0b-046a-4513-9ed6-3350f94faea5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:11:56 crc kubenswrapper[4805]: E1216 12:11:56.850775 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:5b7d3d7e5bdd0f7c2ad742990da2488e576cf0ea6dee2e3245192a89cc959096" Dec 16 12:11:56 crc kubenswrapper[4805]: E1216 12:11:56.851066 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:5b7d3d7e5bdd0f7c2ad742990da2488e576cf0ea6dee2e3245192a89cc959096,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xzxnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-cc776f956-smg8x_openstack-operators(9b3aad50-49b1-43c0-84c9-15368e69abae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:11:57 crc kubenswrapper[4805]: I1216 12:11:57.071167 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:11:57 crc kubenswrapper[4805]: I1216 12:11:57.071227 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:11:57 crc kubenswrapper[4805]: I1216 12:11:57.071280 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 12:11:57 crc kubenswrapper[4805]: I1216 12:11:57.071973 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3092c2527ab0dcd7a9a9a61613b2265defab594c8cd9fbda9395f115e5c0fc6"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 12:11:57 crc kubenswrapper[4805]: I1216 12:11:57.072067 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://d3092c2527ab0dcd7a9a9a61613b2265defab594c8cd9fbda9395f115e5c0fc6" gracePeriod=600 Dec 16 12:11:57 crc kubenswrapper[4805]: E1216 12:11:57.456226 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:c249bb06fd416110fd3e6a44337b65c775258756a9597002ad24704e17dd1fd9" Dec 16 12:11:57 crc kubenswrapper[4805]: E1216 12:11:57.456456 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:c249bb06fd416110fd3e6a44337b65c775258756a9597002ad24704e17dd1fd9,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-98p6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-658bc5c8c5-wlr8s_openstack-operators(39e02343-d3e2-4e57-b38e-1b275f3cb29d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:11:57 crc kubenswrapper[4805]: I1216 12:11:57.895929 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="d3092c2527ab0dcd7a9a9a61613b2265defab594c8cd9fbda9395f115e5c0fc6" exitCode=0 Dec 16 12:11:57 crc kubenswrapper[4805]: I1216 12:11:57.895991 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"d3092c2527ab0dcd7a9a9a61613b2265defab594c8cd9fbda9395f115e5c0fc6"} Dec 16 12:11:57 crc kubenswrapper[4805]: I1216 12:11:57.896027 4805 scope.go:117] "RemoveContainer" containerID="92251edb6d76688dd19f1388e5267454cafee90ea7ff5bb9b248b2631cfd26c8" Dec 16 12:11:57 crc kubenswrapper[4805]: E1216 12:11:57.974993 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:92b0c00727fb91c13331b1fa908252ad17e5f7f0050aee0f3cf988b5d2f61cbd" Dec 16 12:11:57 crc kubenswrapper[4805]: E1216 12:11:57.975287 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:92b0c00727fb91c13331b1fa908252ad17e5f7f0050aee0f3cf988b5d2f61cbd,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgfn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-85d55b5858-4gk2l_openstack-operators(e817150a-4845-4d56-8dd0-229394b946db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:11:58 crc kubenswrapper[4805]: E1216 12:11:58.533637 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:d6a3d956e8dada1d7da372b532f955e6310002527667e24b08220c65956110af" Dec 16 12:11:58 crc kubenswrapper[4805]: E1216 12:11:58.533822 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:d6a3d956e8dada1d7da372b532f955e6310002527667e24b08220c65956110af,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tmk5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-669b58f65-782cb_openstack-operators(f31accc0-70b7-4014-ac71-679dc729ed80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:11:58 crc kubenswrapper[4805]: E1216 12:11:58.912896 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:b9e09dbcf7f70960e90ecbb8b31bbb7acf141fc4975f69e37482df2bd0ea2773" Dec 16 12:11:58 crc kubenswrapper[4805]: E1216 12:11:58.913810 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:b9e09dbcf7f70960e90ecbb8b31bbb7acf141fc4975f69e37482df2bd0ea2773,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lbl77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5d79c6465c-nldwq_openstack-operators(68475cd9-8ddd-44c5-ae7e-446bc92bb188): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:11:59 crc kubenswrapper[4805]: E1216 12:11:59.416890 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:ef7fc46a9e5c1e9d4bbe18bca843ea081d54553f025b76b68127e49121305762" Dec 16 12:11:59 crc kubenswrapper[4805]: E1216 12:11:59.417126 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:ef7fc46a9e5c1e9d4bbe18bca843ea081d54553f025b76b68127e49121305762,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-knvsw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7f764db9b-hq9vm_openstack-operators(ca0e9c4d-e3dc-4c3f-8451-8d0258ce9e96): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:11:59 crc kubenswrapper[4805]: E1216 12:11:59.821738 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:34f61eaefddfe86639c90961af867584c4146801c7d35b902d07af55432fb6b0" Dec 16 12:11:59 crc kubenswrapper[4805]: E1216 12:11:59.822011 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:34f61eaefddfe86639c90961af867584c4146801c7d35b902d07af55432fb6b0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kvkxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-6985cf78fb-rfvkc_openstack-operators(81719820-96fa-418d-9d0b-18ba90027850): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:12:00 crc kubenswrapper[4805]: E1216 12:12:00.432658 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:b69a948d18394c4028a2957201d4dd83f17aea5dc07492fb797f871eeb0091df" Dec 16 12:12:00 crc kubenswrapper[4805]: E1216 12:12:00.432917 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:b69a948d18394c4028a2957201d4dd83f17aea5dc07492fb797f871eeb0091df,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pcszj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-69977bdf55-t27s6_openstack-operators(4a51f724-a3be-4ef5-acd6-84891873147b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:12:01 crc kubenswrapper[4805]: E1216 12:12:01.936452 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:177bba84f71a0b2cfd00a31147aa349fe4c25c83d2b9df7563b5dd5cfeafc161" Dec 16 12:12:01 crc kubenswrapper[4805]: E1216 12:12:01.936915 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:177bba84f71a0b2cfd00a31147aa349fe4c25c83d2b9df7563b5dd5cfeafc161,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9xwn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b444986fd-5mnhx_openstack-operators(b885ab69-dc83-439c-9040-09fc3d238093): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:12:05 crc kubenswrapper[4805]: E1216 12:12:05.956719 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0 is running failed: container process not found" containerID="6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:12:05 crc kubenswrapper[4805]: E1216 12:12:05.958119 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0 is running failed: container process not found" containerID="6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:12:05 crc kubenswrapper[4805]: E1216 12:12:05.958882 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0 is running failed: container process not found" containerID="6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:12:05 crc kubenswrapper[4805]: E1216 12:12:05.958971 4805 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-qhwx5" podUID="706f8077-edd1-4edf-95ae-9e9602b72758" containerName="registry-server" Dec 16 12:12:13 crc kubenswrapper[4805]: I1216 12:12:13.269634 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhwx5" Dec 16 12:12:13 crc kubenswrapper[4805]: I1216 12:12:13.347347 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/706f8077-edd1-4edf-95ae-9e9602b72758-catalog-content\") pod \"706f8077-edd1-4edf-95ae-9e9602b72758\" (UID: \"706f8077-edd1-4edf-95ae-9e9602b72758\") " Dec 16 12:12:13 crc kubenswrapper[4805]: I1216 12:12:13.347839 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/706f8077-edd1-4edf-95ae-9e9602b72758-utilities\") pod \"706f8077-edd1-4edf-95ae-9e9602b72758\" (UID: \"706f8077-edd1-4edf-95ae-9e9602b72758\") " Dec 16 12:12:13 crc kubenswrapper[4805]: I1216 12:12:13.348049 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2d52\" (UniqueName: \"kubernetes.io/projected/706f8077-edd1-4edf-95ae-9e9602b72758-kube-api-access-c2d52\") pod \"706f8077-edd1-4edf-95ae-9e9602b72758\" (UID: \"706f8077-edd1-4edf-95ae-9e9602b72758\") " Dec 16 12:12:13 crc kubenswrapper[4805]: I1216 12:12:13.348964 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/706f8077-edd1-4edf-95ae-9e9602b72758-utilities" (OuterVolumeSpecName: "utilities") pod "706f8077-edd1-4edf-95ae-9e9602b72758" (UID: "706f8077-edd1-4edf-95ae-9e9602b72758"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:12:13 crc kubenswrapper[4805]: I1216 12:12:13.365158 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/706f8077-edd1-4edf-95ae-9e9602b72758-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "706f8077-edd1-4edf-95ae-9e9602b72758" (UID: "706f8077-edd1-4edf-95ae-9e9602b72758"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:12:13 crc kubenswrapper[4805]: I1216 12:12:13.369738 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/706f8077-edd1-4edf-95ae-9e9602b72758-kube-api-access-c2d52" (OuterVolumeSpecName: "kube-api-access-c2d52") pod "706f8077-edd1-4edf-95ae-9e9602b72758" (UID: "706f8077-edd1-4edf-95ae-9e9602b72758"). InnerVolumeSpecName "kube-api-access-c2d52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:12:13 crc kubenswrapper[4805]: E1216 12:12:13.410561 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:21ae046934b4ea85d21bdcc1d6b5bc7cb393e319b6dc7bea7ac1cc96aa4a599d" Dec 16 12:12:13 crc kubenswrapper[4805]: E1216 12:12:13.410727 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:21ae046934b4ea85d21bdcc1d6b5bc7cb393e319b6dc7bea7ac1cc96aa4a599d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-99l42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-7c9ff8845d-swkll_openstack-operators(bbd2ad8a-7239-4e25-bfbd-a009e826a337): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:12:13 crc kubenswrapper[4805]: I1216 12:12:13.449889 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/706f8077-edd1-4edf-95ae-9e9602b72758-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:12:13 crc kubenswrapper[4805]: I1216 12:12:13.450211 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2d52\" (UniqueName: \"kubernetes.io/projected/706f8077-edd1-4edf-95ae-9e9602b72758-kube-api-access-c2d52\") on node \"crc\" DevicePath \"\"" Dec 16 12:12:13 crc kubenswrapper[4805]: I1216 12:12:13.450224 4805 reconciler_common.go:293] 
"Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/706f8077-edd1-4edf-95ae-9e9602b72758-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:12:13 crc kubenswrapper[4805]: I1216 12:12:13.823937 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t5zrz"] Dec 16 12:12:13 crc kubenswrapper[4805]: E1216 12:12:13.999243 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-54fd9dc4b5-tszdx" podUID="0718fce0-14e8-434b-be98-ef48ec6059f3" Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.010543 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6bc5b9c47-tdd6s" event={"ID":"f5b07707-f2f4-4664-9522-268f8ee833db","Type":"ContainerStarted","Data":"e2949d7ecd88a87c054c46726174eda22e81108cf3458d0135e576bc94380eaa"} Dec 16 12:12:14 crc kubenswrapper[4805]: E1216 12:12:14.011118 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-bb565c8dd-5gtrz" podUID="81b6ebe6-984f-4ecc-9d75-ac78097f7af2" Dec 16 12:12:14 crc kubenswrapper[4805]: E1216 12:12:14.011607 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-6bc5b9c47-tdd6s" podUID="f5b07707-f2f4-4664-9522-268f8ee833db" Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.013522 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-58879495c-pqrdf" event={"ID":"57bf5f89-e14d-442f-8064-2c0ca66139c4","Type":"ContainerStarted","Data":"29cb238d077edcf26728db310a11be40869a87514704d42fc0c621b8098c0cae"} Dec 16 12:12:14 crc kubenswrapper[4805]: E1216 12:12:14.016192 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-658bc5c8c5-wlr8s" podUID="39e02343-d3e2-4e57-b38e-1b275f3cb29d" Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.017514 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54fd9dc4b5-tszdx" event={"ID":"0718fce0-14e8-434b-be98-ef48ec6059f3","Type":"ContainerStarted","Data":"8641b28189780953f5642aa041823392c7734f07241274038c4633c9908a4482"} Dec 16 12:12:14 crc kubenswrapper[4805]: E1216 12:12:14.019914 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-5847f67c56-pp6mh" podUID="b738fc79-2b52-4759-a3ee-72e0946df392" Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.025492 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-bb565c8dd-5gtrz" 
event={"ID":"81b6ebe6-984f-4ecc-9d75-ac78097f7af2","Type":"ContainerStarted","Data":"2a3886fa7b00fb5658dc3ef889bf66fddcc28a4040b0173f272fe05deac4090d"} Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.037214 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5847f67c56-pp6mh" event={"ID":"b738fc79-2b52-4759-a3ee-72e0946df392","Type":"ContainerStarted","Data":"0272e31f8336dc37375c1042f55bbe761872f3d9f0e45d6beccf0406b498d38c"} Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.040453 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-p6wbs" event={"ID":"df737798-a34c-4142-88c2-592096b02f85","Type":"ContainerStarted","Data":"f3b873fd6b096fefe1205a4321919bc930bb370af557217620ea6059e1d0ecca"} Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.042033 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-l26rt" event={"ID":"5581487f-dd20-4fb5-99b7-c6cfb197e548","Type":"ContainerStarted","Data":"9c9180e27cfa842acc5324f78268a311fdfb50454b28bfdd6976a221cf49a9ec"} Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.046107 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7cc599445b-b76nx" event={"ID":"0e8bdc0b-046a-4513-9ed6-3350f94faea5","Type":"ContainerStarted","Data":"46fdb78be8c72871279db9225491078860cda16ea53baf878863258478263bf8"} Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.047668 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-658bc5c8c5-wlr8s" event={"ID":"39e02343-d3e2-4e57-b38e-1b275f3cb29d","Type":"ContainerStarted","Data":"68ea924ba8ca3f474e1a746c8967eb177434c70fb34fc132a0ceaed462207522"} Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.050098 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-7b45cd6d68-g5msx" event={"ID":"37dfd47c-0789-4054-a4c7-37cff4d15b15","Type":"ContainerStarted","Data":"4b1daf9c98c54c6db4f170307de9fa61536a66bc6b46189ae6df700f4d371b34"} Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.052968 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhwx5" event={"ID":"706f8077-edd1-4edf-95ae-9e9602b72758","Type":"ContainerDied","Data":"8d5f0aff2523d79be37f7598dc8cb3fbb9a8e39cd71a92ad01b068573430c9f5"} Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.053039 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhwx5" Dec 16 12:12:14 crc kubenswrapper[4805]: E1216 12:12:14.107389 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-7b45cd6d68-g5msx" podUID="37dfd47c-0789-4054-a4c7-37cff4d15b15" Dec 16 12:12:14 crc kubenswrapper[4805]: E1216 12:12:14.107669 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7cc599445b-b76nx" podUID="0e8bdc0b-046a-4513-9ed6-3350f94faea5" Dec 16 12:12:14 crc kubenswrapper[4805]: E1216 12:12:14.109222 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-cc776f956-smg8x" podUID="9b3aad50-49b1-43c0-84c9-15368e69abae" Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.143706 4805 scope.go:117] "RemoveContainer" containerID="6a8bf83d667c1250e95db6b67c55e111bf2d0da702e4896ad58268b56229f8e0" Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.178176 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhwx5"] Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.188001 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhwx5"] Dec 16 12:12:14 crc kubenswrapper[4805]: E1216 12:12:14.271621 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-669b58f65-782cb" podUID="f31accc0-70b7-4014-ac71-679dc729ed80" Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.276438 4805 scope.go:117] "RemoveContainer" containerID="b3138c5f62641a8f579038b222c805969cdf1bd7f6aa545145dd185f85d08f47" Dec 16 12:12:14 crc kubenswrapper[4805]: E1216 12:12:14.368259 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx" podUID="b885ab69-dc83-439c-9040-09fc3d238093" Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.509834 4805 scope.go:117] "RemoveContainer" containerID="c69dce834f75a1ff830390a7800570676913c0613d0e7b9e19ce5b756783f0c9" Dec 16 12:12:14 crc kubenswrapper[4805]: I1216 12:12:14.545567 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="706f8077-edd1-4edf-95ae-9e9602b72758" path="/var/lib/kubelet/pods/706f8077-edd1-4edf-95ae-9e9602b72758/volumes" Dec 16 12:12:14 crc kubenswrapper[4805]: E1216 12:12:14.669641 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-6985cf78fb-rfvkc" podUID="81719820-96fa-418d-9d0b-18ba90027850" Dec 16 12:12:14 crc kubenswrapper[4805]: 
E1216 12:12:14.877898 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l" podUID="e817150a-4845-4d56-8dd0-229394b946db" Dec 16 12:12:14 crc kubenswrapper[4805]: E1216 12:12:14.990227 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-69977bdf55-t27s6" podUID="4a51f724-a3be-4ef5-acd6-84891873147b" Dec 16 12:12:15 crc kubenswrapper[4805]: E1216 12:12:15.045097 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5d79c6465c-nldwq" podUID="68475cd9-8ddd-44c5-ae7e-446bc92bb188" Dec 16 12:12:15 crc kubenswrapper[4805]: I1216 12:12:15.071019 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6985cf78fb-rfvkc" event={"ID":"81719820-96fa-418d-9d0b-18ba90027850","Type":"ContainerStarted","Data":"9b4db612b21cb270d83df49d8af102c484cb3c1150f56afb157abf5ca750fd3f"} Dec 16 12:12:15 crc kubenswrapper[4805]: E1216 12:12:15.082471 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll" podUID="bbd2ad8a-7239-4e25-bfbd-a009e826a337" Dec 16 12:12:15 crc kubenswrapper[4805]: I1216 12:12:15.082601 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx" event={"ID":"b885ab69-dc83-439c-9040-09fc3d238093","Type":"ContainerStarted","Data":"ef54d772f69676d5a8ffbf6e1d342ef2dca475d2a045e8cdc01f4c2213d211ca"} Dec 16 12:12:15 crc kubenswrapper[4805]: I1216 12:12:15.090117 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-669b58f65-782cb" event={"ID":"f31accc0-70b7-4014-ac71-679dc729ed80","Type":"ContainerStarted","Data":"db0af92563fff86eaf38d52518d462969456c933066d99489d986665a56539ac"} Dec 16 12:12:15 crc kubenswrapper[4805]: I1216 12:12:15.104459 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-xhhsn" event={"ID":"93f2c029-57dc-47fc-9c2e-18f2710ff53e","Type":"ContainerStarted","Data":"a2c29ea18d66cb3f221aa55aa518dd3d49715f32b8172209f46915e162eab27f"} Dec 16 12:12:15 crc kubenswrapper[4805]: I1216 12:12:15.121056 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-69977bdf55-t27s6" event={"ID":"4a51f724-a3be-4ef5-acd6-84891873147b","Type":"ContainerStarted","Data":"9f5b912961224c736df57e3a4fe0cfd293260c8d513f18d1d131b8b643b95b28"} Dec 16 12:12:15 crc kubenswrapper[4805]: I1216 12:12:15.146482 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" 
event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"49407ccdd3d008f1744b80cf9c050b56468ce45acde47e6aab9b00525b75e878"} Dec 16 12:12:15 crc kubenswrapper[4805]: I1216 12:12:15.164739 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-cc776f956-smg8x" event={"ID":"9b3aad50-49b1-43c0-84c9-15368e69abae","Type":"ContainerStarted","Data":"12243e6dd25574d2d73e2aaaeeffc01ddd0cd9b3c2074ecc50f73f372c07c997"} Dec 16 12:12:15 crc kubenswrapper[4805]: I1216 12:12:15.174230 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5zrz" event={"ID":"da19c238-d2cb-4862-a08d-9b73f2b8a2af","Type":"ContainerStarted","Data":"91df081801ec50ee50e93053739809bb425326471dc9671c07c9409db9f71e46"} Dec 16 12:12:15 crc kubenswrapper[4805]: I1216 12:12:15.186382 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5d79c6465c-nldwq" event={"ID":"68475cd9-8ddd-44c5-ae7e-446bc92bb188","Type":"ContainerStarted","Data":"d92e39014500d32f2300c420fb90240ab27fbfcbbe61c2e5d97a816794d57a23"} Dec 16 12:12:15 crc kubenswrapper[4805]: I1216 12:12:15.209279 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l" event={"ID":"e817150a-4845-4d56-8dd0-229394b946db","Type":"ContainerStarted","Data":"2276a5efb28453e35f72d9c59d049eb326932deb2b84d868396d6d1afbd66089"} Dec 16 12:12:15 crc kubenswrapper[4805]: E1216 12:12:15.303537 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7f764db9b-hq9vm" podUID="ca0e9c4d-e3dc-4c3f-8451-8d0258ce9e96" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.235722 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-l26rt" event={"ID":"5581487f-dd20-4fb5-99b7-c6cfb197e548","Type":"ContainerStarted","Data":"166061c8735330a5e60356196d7a42380ca15e3fd1976aa0bfed2ac26d4a29c5"} Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.236478 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-l26rt" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.240426 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-58879495c-pqrdf" event={"ID":"57bf5f89-e14d-442f-8064-2c0ca66139c4","Type":"ContainerStarted","Data":"df9b70c557c5b0acab7bf223131dc745bf558b46d03b8917fafe3d6707e5fa46"} Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.241236 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-58879495c-pqrdf" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.246567 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7f764db9b-hq9vm" event={"ID":"ca0e9c4d-e3dc-4c3f-8451-8d0258ce9e96","Type":"ContainerStarted","Data":"a83ea11e7d967f370a634888d864f04f377ddb9cc4599bcf657c7c5cc9437aee"} Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.253013 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll" event={"ID":"bbd2ad8a-7239-4e25-bfbd-a009e826a337","Type":"ContainerStarted","Data":"bee4050c0e9be121a305173e131c89bb2f9245695699e2b84d685f299e6d9b90"} Dec 16 12:12:16 crc kubenswrapper[4805]: E1216 12:12:16.253941 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:21ae046934b4ea85d21bdcc1d6b5bc7cb393e319b6dc7bea7ac1cc96aa4a599d\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll" podUID="bbd2ad8a-7239-4e25-bfbd-a009e826a337" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.264897 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-p6wbs" event={"ID":"df737798-a34c-4142-88c2-592096b02f85","Type":"ContainerStarted","Data":"077c66f8c34cc069c2cb133507e630928af04bac35e2fcc53b977d75584ede5c"} Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.265043 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-p6wbs" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.267019 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-bb565c8dd-5gtrz" event={"ID":"81b6ebe6-984f-4ecc-9d75-ac78097f7af2","Type":"ContainerStarted","Data":"9ea91f45e25ae7bbe0551618a7b29dcde25159fd9c4183470648ca273ef95dfb"} Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.267236 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-bb565c8dd-5gtrz" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.271738 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5847f67c56-pp6mh" event={"ID":"b738fc79-2b52-4759-a3ee-72e0946df392","Type":"ContainerStarted","Data":"f5bc599df383e142c2d858df7858306118b43b6df084f8b15fabfb3e8c7a47e8"} Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.272340 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5847f67c56-pp6mh" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.277819 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn" event={"ID":"418e4014-2b81-4b93-a665-ca28d1e1d7ee","Type":"ContainerStarted","Data":"5f953efb538762f421d7275bc931999d36799998582425f28adbc090f8ce8e26"} Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.278741 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-l26rt" podStartSLOduration=26.59515006 podStartE2EDuration="49.278722207s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:31.189623249 +0000 UTC m=+964.907881054" lastFinishedPulling="2025-12-16 12:11:53.873195396 +0000 UTC m=+987.591453201" observedRunningTime="2025-12-16 12:12:16.270057209 +0000 UTC m=+1009.988315014" watchObservedRunningTime="2025-12-16 12:12:16.278722207 +0000 UTC m=+1009.996980022" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.288267 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-xhhsn" event={"ID":"93f2c029-57dc-47fc-9c2e-18f2710ff53e","Type":"ContainerStarted","Data":"c091afe487d1b1b13ea6ad200fae3dfcc9aa1c05082996bb2788f1a89d69bd27"} Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.288397 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-xhhsn" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.299934 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54fd9dc4b5-tszdx" event={"ID":"0718fce0-14e8-434b-be98-ef48ec6059f3","Type":"ContainerStarted","Data":"2f6f39cd90ee77973b186b930eec543fcb0b1d86218c53a24f2def91fe3d8dd2"} Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.300594 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-54fd9dc4b5-tszdx" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.322058 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" event={"ID":"3b82dc59-a470-4665-8271-3bbcfecb73f1","Type":"ContainerStarted","Data":"34689778b20c27d4eb399b16ca4b01814280f36e87b8f9db8b8567fff6b5d44d"} Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.322793 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.328646 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5zrz" event={"ID":"da19c238-d2cb-4862-a08d-9b73f2b8a2af","Type":"ContainerStarted","Data":"cf901763b34b656894e9711871f58051d3319e3d54264309a443163dbda8bdf5"} Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.367602 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-bb565c8dd-5gtrz" podStartSLOduration=3.675126661 podStartE2EDuration="49.367582865s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:29.559250139 +0000 UTC m=+963.277507944" lastFinishedPulling="2025-12-16 12:12:15.251706343 +0000 UTC m=+1008.969964148" observedRunningTime="2025-12-16 12:12:16.366737071 +0000 UTC m=+1010.084994866" watchObservedRunningTime="2025-12-16 12:12:16.367582865 +0000 UTC m=+1010.085840680" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.436436 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-58879495c-pqrdf" podStartSLOduration=25.195878593 podStartE2EDuration="49.436418869s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:31.221163433 +0000 UTC m=+964.939421238" lastFinishedPulling="2025-12-16 12:11:55.461703709 +0000 UTC m=+989.179961514" observedRunningTime="2025-12-16 12:12:16.4322916 +0000 UTC m=+1010.150549405" watchObservedRunningTime="2025-12-16 12:12:16.436418869 +0000 UTC m=+1010.154676684" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.457843 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-p6wbs" podStartSLOduration=26.258801849 podStartE2EDuration="49.457827712s" podCreationTimestamp="2025-12-16 12:11:27 +0000 
UTC" firstStartedPulling="2025-12-16 12:11:30.673352069 +0000 UTC m=+964.391609874" lastFinishedPulling="2025-12-16 12:11:53.872377932 +0000 UTC m=+987.590635737" observedRunningTime="2025-12-16 12:12:16.457271546 +0000 UTC m=+1010.175529351" watchObservedRunningTime="2025-12-16 12:12:16.457827712 +0000 UTC m=+1010.176085527" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.485356 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-xhhsn" podStartSLOduration=25.894038852 podStartE2EDuration="49.485336961s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:30.281123514 +0000 UTC m=+963.999381319" lastFinishedPulling="2025-12-16 12:11:53.872421623 +0000 UTC m=+987.590679428" observedRunningTime="2025-12-16 12:12:16.480445471 +0000 UTC m=+1010.198703276" watchObservedRunningTime="2025-12-16 12:12:16.485336961 +0000 UTC m=+1010.203594776" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.554752 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" podStartSLOduration=7.7999277540000005 podStartE2EDuration="49.554734001s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:31.650798299 +0000 UTC m=+965.369056104" lastFinishedPulling="2025-12-16 12:12:13.405604546 +0000 UTC m=+1007.123862351" observedRunningTime="2025-12-16 12:12:16.533742689 +0000 UTC m=+1010.252000504" watchObservedRunningTime="2025-12-16 12:12:16.554734001 +0000 UTC m=+1010.272991816" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.567067 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-54fd9dc4b5-tszdx" podStartSLOduration=3.874120107 podStartE2EDuration="49.567048624s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:29.559686152 +0000 UTC m=+963.277943957" lastFinishedPulling="2025-12-16 12:12:15.252614669 +0000 UTC m=+1008.970872474" observedRunningTime="2025-12-16 12:12:16.561584547 +0000 UTC m=+1010.279842362" watchObservedRunningTime="2025-12-16 12:12:16.567048624 +0000 UTC m=+1010.285306439" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.598949 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn" podStartSLOduration=5.961679122 podStartE2EDuration="48.598932988s" podCreationTimestamp="2025-12-16 12:11:28 +0000 UTC" firstStartedPulling="2025-12-16 12:11:31.650378717 +0000 UTC m=+965.368636522" lastFinishedPulling="2025-12-16 12:12:14.287632583 +0000 UTC m=+1008.005890388" observedRunningTime="2025-12-16 12:12:16.596869969 +0000 UTC m=+1010.315127794" watchObservedRunningTime="2025-12-16 12:12:16.598932988 +0000 UTC m=+1010.317190803" Dec 16 12:12:16 crc kubenswrapper[4805]: I1216 12:12:16.700070 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5847f67c56-pp6mh" podStartSLOduration=3.599074042 podStartE2EDuration="49.700047647s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:29.164291697 +0000 UTC m=+962.882549502" lastFinishedPulling="2025-12-16 12:12:15.265265302 +0000 UTC m=+1008.983523107" observedRunningTime="2025-12-16 12:12:16.690652607 +0000 UTC 
m=+1010.408910412" watchObservedRunningTime="2025-12-16 12:12:16.700047647 +0000 UTC m=+1010.418305482" Dec 16 12:12:17 crc kubenswrapper[4805]: I1216 12:12:17.373082 4805 generic.go:334] "Generic (PLEG): container finished" podID="da19c238-d2cb-4862-a08d-9b73f2b8a2af" containerID="cf901763b34b656894e9711871f58051d3319e3d54264309a443163dbda8bdf5" exitCode=0 Dec 16 12:12:17 crc kubenswrapper[4805]: I1216 12:12:17.373388 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5zrz" event={"ID":"da19c238-d2cb-4862-a08d-9b73f2b8a2af","Type":"ContainerDied","Data":"cf901763b34b656894e9711871f58051d3319e3d54264309a443163dbda8bdf5"} Dec 16 12:12:17 crc kubenswrapper[4805]: I1216 12:12:17.391864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-658bc5c8c5-wlr8s" event={"ID":"39e02343-d3e2-4e57-b38e-1b275f3cb29d","Type":"ContainerStarted","Data":"a5d1533783aa2f915dd234d56b1b5e4ee10c2f3ef33ddc8d0f03733a99548b4d"} Dec 16 12:12:17 crc kubenswrapper[4805]: I1216 12:12:17.392750 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-658bc5c8c5-wlr8s" Dec 16 12:12:17 crc kubenswrapper[4805]: I1216 12:12:17.399621 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5d79c6465c-nldwq" event={"ID":"68475cd9-8ddd-44c5-ae7e-446bc92bb188","Type":"ContainerStarted","Data":"1245ecbb4dad1705bf9fe01b1217140e32d9f7ef98d52ee744ac26f25a91f2a4"} Dec 16 12:12:17 crc kubenswrapper[4805]: I1216 12:12:17.400202 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5d79c6465c-nldwq" Dec 16 12:12:17 crc kubenswrapper[4805]: I1216 12:12:17.412282 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6985cf78fb-rfvkc" event={"ID":"81719820-96fa-418d-9d0b-18ba90027850","Type":"ContainerStarted","Data":"76a7ca502f52b909b406c80927d995423f64ea83f58e4d9e3e7bb629f5490f3c"} Dec 16 12:12:17 crc kubenswrapper[4805]: I1216 12:12:17.412575 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6985cf78fb-rfvkc" Dec 16 12:12:17 crc kubenswrapper[4805]: I1216 12:12:17.424291 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-cc776f956-smg8x" event={"ID":"9b3aad50-49b1-43c0-84c9-15368e69abae","Type":"ContainerStarted","Data":"4608eb68562ca33bd435959f1c2e1498b29c4f908147dc9906ec28647cf57cad"} Dec 16 12:12:17 crc kubenswrapper[4805]: E1216 12:12:17.443401 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:21ae046934b4ea85d21bdcc1d6b5bc7cb393e319b6dc7bea7ac1cc96aa4a599d\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll" podUID="bbd2ad8a-7239-4e25-bfbd-a009e826a337" Dec 16 12:12:17 crc kubenswrapper[4805]: I1216 12:12:17.501024 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-658bc5c8c5-wlr8s" podStartSLOduration=5.074450606 podStartE2EDuration="49.50099186s" podCreationTimestamp="2025-12-16 12:11:28 +0000 UTC" firstStartedPulling="2025-12-16 
12:11:31.350613363 +0000 UTC m=+965.068871168" lastFinishedPulling="2025-12-16 12:12:15.777154617 +0000 UTC m=+1009.495412422" observedRunningTime="2025-12-16 12:12:17.448625629 +0000 UTC m=+1011.166883434" watchObservedRunningTime="2025-12-16 12:12:17.50099186 +0000 UTC m=+1011.219249675" Dec 16 12:12:17 crc kubenswrapper[4805]: I1216 12:12:17.613943 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5d79c6465c-nldwq" podStartSLOduration=4.380159749 podStartE2EDuration="49.613917147s" podCreationTimestamp="2025-12-16 12:11:28 +0000 UTC" firstStartedPulling="2025-12-16 12:11:31.482331619 +0000 UTC m=+965.200589424" lastFinishedPulling="2025-12-16 12:12:16.716089017 +0000 UTC m=+1010.434346822" observedRunningTime="2025-12-16 12:12:17.500377442 +0000 UTC m=+1011.218635247" watchObservedRunningTime="2025-12-16 12:12:17.613917147 +0000 UTC m=+1011.332174962" Dec 16 12:12:17 crc kubenswrapper[4805]: I1216 12:12:17.709064 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6985cf78fb-rfvkc" podStartSLOduration=4.478371799 podStartE2EDuration="50.709046235s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:30.281273638 +0000 UTC m=+963.999531443" lastFinishedPulling="2025-12-16 12:12:16.511948074 +0000 UTC m=+1010.230205879" observedRunningTime="2025-12-16 12:12:17.708631533 +0000 UTC m=+1011.426889338" watchObservedRunningTime="2025-12-16 12:12:17.709046235 +0000 UTC m=+1011.427304050" Dec 16 12:12:17 crc kubenswrapper[4805]: I1216 12:12:17.744434 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-cc776f956-smg8x" podStartSLOduration=4.974820891 podStartE2EDuration="50.744413469s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:31.092806353 +0000 UTC m=+964.811064158" lastFinishedPulling="2025-12-16 12:12:16.862398931 +0000 UTC m=+1010.580656736" observedRunningTime="2025-12-16 12:12:17.74096679 +0000 UTC m=+1011.459224625" watchObservedRunningTime="2025-12-16 12:12:17.744413469 +0000 UTC m=+1011.462671294" Dec 16 12:12:18 crc kubenswrapper[4805]: I1216 12:12:18.289537 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-58879495c-pqrdf" Dec 16 12:12:18 crc kubenswrapper[4805]: I1216 12:12:18.381358 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-p6wbs" Dec 16 12:12:18 crc kubenswrapper[4805]: I1216 12:12:18.465828 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7cc599445b-b76nx" event={"ID":"0e8bdc0b-046a-4513-9ed6-3350f94faea5","Type":"ContainerStarted","Data":"565a07035007b0aaf3b5f964b9af49ff91f53131d29447045fd8a64c4a690172"} Dec 16 12:12:18 crc kubenswrapper[4805]: I1216 12:12:18.468487 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7cc599445b-b76nx" Dec 16 12:12:18 crc kubenswrapper[4805]: I1216 12:12:18.489589 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-7b45cd6d68-g5msx" 
event={"ID":"37dfd47c-0789-4054-a4c7-37cff4d15b15","Type":"ContainerStarted","Data":"7e77cbdd69fe60e48630c9d923f243c05da0c24e34391c17e35f39f77cd49313"} Dec 16 12:12:18 crc kubenswrapper[4805]: I1216 12:12:18.490364 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-7b45cd6d68-g5msx" Dec 16 12:12:18 crc kubenswrapper[4805]: I1216 12:12:18.501276 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-69977bdf55-t27s6" event={"ID":"4a51f724-a3be-4ef5-acd6-84891873147b","Type":"ContainerStarted","Data":"1179142057ce302817225973cdcdcb7a4c73759776ba99cf59a9683d4e4a9238"} Dec 16 12:12:18 crc kubenswrapper[4805]: I1216 12:12:18.502286 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-69977bdf55-t27s6" Dec 16 12:12:18 crc kubenswrapper[4805]: I1216 12:12:18.509379 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7cc599445b-b76nx" podStartSLOduration=4.491230108 podStartE2EDuration="51.509359809s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:30.172241703 +0000 UTC m=+963.890499508" lastFinishedPulling="2025-12-16 12:12:17.190371404 +0000 UTC m=+1010.908629209" observedRunningTime="2025-12-16 12:12:18.503776859 +0000 UTC m=+1012.222034674" watchObservedRunningTime="2025-12-16 12:12:18.509359809 +0000 UTC m=+1012.227617634" Dec 16 12:12:18 crc kubenswrapper[4805]: I1216 12:12:18.513497 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7f764db9b-hq9vm" Dec 16 12:12:18 crc kubenswrapper[4805]: I1216 12:12:18.518563 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-cc776f956-smg8x" Dec 16 12:12:18 crc kubenswrapper[4805]: I1216 12:12:18.637779 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-7b45cd6d68-g5msx" podStartSLOduration=4.327669769 podStartE2EDuration="51.6377609s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:29.797319745 +0000 UTC m=+963.515577550" lastFinishedPulling="2025-12-16 12:12:17.107410876 +0000 UTC m=+1010.825668681" observedRunningTime="2025-12-16 12:12:18.546901955 +0000 UTC m=+1012.265159780" watchObservedRunningTime="2025-12-16 12:12:18.6377609 +0000 UTC m=+1012.356018715" Dec 16 12:12:18 crc kubenswrapper[4805]: I1216 12:12:18.664311 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-l26rt" Dec 16 12:12:18 crc kubenswrapper[4805]: I1216 12:12:18.704002 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-69977bdf55-t27s6" podStartSLOduration=4.034372501 podStartE2EDuration="51.703978249s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:29.43786938 +0000 UTC m=+963.156127185" lastFinishedPulling="2025-12-16 12:12:17.107475128 +0000 UTC m=+1010.825732933" observedRunningTime="2025-12-16 12:12:18.684109969 +0000 UTC m=+1012.402367804" watchObservedRunningTime="2025-12-16 12:12:18.703978249 +0000 UTC m=+1012.422236074" Dec 16 12:12:18 crc 
kubenswrapper[4805]: I1216 12:12:18.749774 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7f764db9b-hq9vm" podStartSLOduration=5.064756939 podStartE2EDuration="51.749755871s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:31.094945325 +0000 UTC m=+964.813203130" lastFinishedPulling="2025-12-16 12:12:17.779944257 +0000 UTC m=+1011.498202062" observedRunningTime="2025-12-16 12:12:18.737046537 +0000 UTC m=+1012.455304342" watchObservedRunningTime="2025-12-16 12:12:18.749755871 +0000 UTC m=+1012.468013686" Dec 16 12:12:19 crc kubenswrapper[4805]: I1216 12:12:19.522101 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7f764db9b-hq9vm" event={"ID":"ca0e9c4d-e3dc-4c3f-8451-8d0258ce9e96","Type":"ContainerStarted","Data":"6ff3c1cf522444120546922c7b4e2163cef49d162419005bb7384b2e90efce03"} Dec 16 12:12:19 crc kubenswrapper[4805]: I1216 12:12:19.524946 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx" event={"ID":"b885ab69-dc83-439c-9040-09fc3d238093","Type":"ContainerStarted","Data":"299142aba2b2e2381ba33e36c22c767156aed825a55b5973ea2379e0caa509db"} Dec 16 12:12:19 crc kubenswrapper[4805]: I1216 12:12:19.525108 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx" Dec 16 12:12:19 crc kubenswrapper[4805]: I1216 12:12:19.527784 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5zrz" event={"ID":"da19c238-d2cb-4862-a08d-9b73f2b8a2af","Type":"ContainerStarted","Data":"fa5ad84c04b7bf0a8ab8bf1733ad1a73bef2d60527c94e58b903a74ebe89ea55"} Dec 16 12:12:19 crc kubenswrapper[4805]: I1216 12:12:19.530711 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-669b58f65-782cb" event={"ID":"f31accc0-70b7-4014-ac71-679dc729ed80","Type":"ContainerStarted","Data":"278125bfabb4dd3cfcf285c7310b0e7cbece871912cb79982c8fa821554da616"} Dec 16 12:12:19 crc kubenswrapper[4805]: I1216 12:12:19.531355 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-669b58f65-782cb" Dec 16 12:12:19 crc kubenswrapper[4805]: I1216 12:12:19.533160 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l" event={"ID":"e817150a-4845-4d56-8dd0-229394b946db","Type":"ContainerStarted","Data":"f08e4e52c55cdbe5026e6e7d600a3cf0f3892537672c4a48985ef32cbfb00e2b"} Dec 16 12:12:19 crc kubenswrapper[4805]: I1216 12:12:19.533844 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l" Dec 16 12:12:19 crc kubenswrapper[4805]: I1216 12:12:19.537352 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6bc5b9c47-tdd6s" event={"ID":"f5b07707-f2f4-4664-9522-268f8ee833db","Type":"ContainerStarted","Data":"a0fea4f0717025bdce70ee2931b09e849f46b9eafbf3f5d4527a853b8a61bf7e"} Dec 16 12:12:19 crc kubenswrapper[4805]: I1216 12:12:19.612159 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx" 
podStartSLOduration=5.644880941 podStartE2EDuration="52.612117225s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:31.129255748 +0000 UTC m=+964.847513553" lastFinishedPulling="2025-12-16 12:12:18.096492032 +0000 UTC m=+1011.814749837" observedRunningTime="2025-12-16 12:12:19.606555625 +0000 UTC m=+1013.324813430" watchObservedRunningTime="2025-12-16 12:12:19.612117225 +0000 UTC m=+1013.330375050" Dec 16 12:12:19 crc kubenswrapper[4805]: I1216 12:12:19.807117 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6bc5b9c47-tdd6s" podStartSLOduration=7.031986777 podStartE2EDuration="52.807099585s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:31.426292413 +0000 UTC m=+965.144550208" lastFinishedPulling="2025-12-16 12:12:17.201405211 +0000 UTC m=+1010.919663016" observedRunningTime="2025-12-16 12:12:19.796503621 +0000 UTC m=+1013.514761446" watchObservedRunningTime="2025-12-16 12:12:19.807099585 +0000 UTC m=+1013.525357410" Dec 16 12:12:19 crc kubenswrapper[4805]: I1216 12:12:19.813844 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l" podStartSLOduration=7.039306107 podStartE2EDuration="52.813826478s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:31.42585212 +0000 UTC m=+965.144109925" lastFinishedPulling="2025-12-16 12:12:17.200372491 +0000 UTC m=+1010.918630296" observedRunningTime="2025-12-16 12:12:19.715831418 +0000 UTC m=+1013.434089223" watchObservedRunningTime="2025-12-16 12:12:19.813826478 +0000 UTC m=+1013.532084293" Dec 16 12:12:19 crc kubenswrapper[4805]: I1216 12:12:19.936027 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-669b58f65-782cb" podStartSLOduration=5.172081669 podStartE2EDuration="52.936009451s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:29.438618581 +0000 UTC m=+963.156876376" lastFinishedPulling="2025-12-16 12:12:17.202546353 +0000 UTC m=+1010.920804158" observedRunningTime="2025-12-16 12:12:19.93320076 +0000 UTC m=+1013.651458565" watchObservedRunningTime="2025-12-16 12:12:19.936009451 +0000 UTC m=+1013.654267266" Dec 16 12:12:20 crc kubenswrapper[4805]: I1216 12:12:20.322932 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk" Dec 16 12:12:21 crc kubenswrapper[4805]: I1216 12:12:21.554608 4805 generic.go:334] "Generic (PLEG): container finished" podID="da19c238-d2cb-4862-a08d-9b73f2b8a2af" containerID="fa5ad84c04b7bf0a8ab8bf1733ad1a73bef2d60527c94e58b903a74ebe89ea55" exitCode=0 Dec 16 12:12:21 crc kubenswrapper[4805]: I1216 12:12:21.556294 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5zrz" event={"ID":"da19c238-d2cb-4862-a08d-9b73f2b8a2af","Type":"ContainerDied","Data":"fa5ad84c04b7bf0a8ab8bf1733ad1a73bef2d60527c94e58b903a74ebe89ea55"} Dec 16 12:12:22 crc kubenswrapper[4805]: I1216 12:12:22.565066 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5zrz" event={"ID":"da19c238-d2cb-4862-a08d-9b73f2b8a2af","Type":"ContainerStarted","Data":"3a0fddfbecf886bd29e637e7bf4cb1268c9e029e507c2c0879fba643f95f70db"} Dec 16 
12:12:22 crc kubenswrapper[4805]: I1216 12:12:22.584177 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t5zrz" podStartSLOduration=32.840654838 podStartE2EDuration="38.584156723s" podCreationTimestamp="2025-12-16 12:11:44 +0000 UTC" firstStartedPulling="2025-12-16 12:12:16.332047536 +0000 UTC m=+1010.050305341" lastFinishedPulling="2025-12-16 12:12:22.075549421 +0000 UTC m=+1015.793807226" observedRunningTime="2025-12-16 12:12:22.579999204 +0000 UTC m=+1016.298257029" watchObservedRunningTime="2025-12-16 12:12:22.584156723 +0000 UTC m=+1016.302414548" Dec 16 12:12:24 crc kubenswrapper[4805]: I1216 12:12:24.603001 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:12:24 crc kubenswrapper[4805]: I1216 12:12:24.603340 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:12:24 crc kubenswrapper[4805]: I1216 12:12:24.682103 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:12:27 crc kubenswrapper[4805]: I1216 12:12:27.477790 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-69977bdf55-t27s6" Dec 16 12:12:27 crc kubenswrapper[4805]: I1216 12:12:27.481157 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-669b58f65-782cb" Dec 16 12:12:27 crc kubenswrapper[4805]: I1216 12:12:27.509861 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5847f67c56-pp6mh" Dec 16 12:12:27 crc kubenswrapper[4805]: I1216 12:12:27.721231 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-bb565c8dd-5gtrz" Dec 16 12:12:27 crc kubenswrapper[4805]: I1216 12:12:27.817716 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-54fd9dc4b5-tszdx" Dec 16 12:12:27 crc kubenswrapper[4805]: I1216 12:12:27.972198 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6985cf78fb-rfvkc" Dec 16 12:12:27 crc kubenswrapper[4805]: I1216 12:12:27.990885 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-7b45cd6d68-g5msx" Dec 16 12:12:28 crc kubenswrapper[4805]: I1216 12:12:28.138735 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7f764db9b-hq9vm" Dec 16 12:12:28 crc kubenswrapper[4805]: I1216 12:12:28.169768 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7cc599445b-b76nx" Dec 16 12:12:28 crc kubenswrapper[4805]: I1216 12:12:28.242484 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-xhhsn" Dec 16 12:12:28 crc kubenswrapper[4805]: I1216 12:12:28.717067 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx" Dec 16 12:12:28 crc kubenswrapper[4805]: I1216 12:12:28.814731 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-cc776f956-smg8x" Dec 16 12:12:29 crc kubenswrapper[4805]: I1216 12:12:29.015184 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6bc5b9c47-tdd6s" Dec 16 12:12:29 crc kubenswrapper[4805]: I1216 12:12:29.017512 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6bc5b9c47-tdd6s" Dec 16 12:12:29 crc kubenswrapper[4805]: I1216 12:12:29.047582 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-658bc5c8c5-wlr8s" Dec 16 12:12:29 crc kubenswrapper[4805]: I1216 12:12:29.082581 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5d79c6465c-nldwq" Dec 16 12:12:29 crc kubenswrapper[4805]: I1216 12:12:29.437345 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-85d55b5858-4gk2l" Dec 16 12:12:31 crc kubenswrapper[4805]: I1216 12:12:31.524737 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 16 12:12:32 crc kubenswrapper[4805]: I1216 12:12:32.747073 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll" event={"ID":"bbd2ad8a-7239-4e25-bfbd-a009e826a337","Type":"ContainerStarted","Data":"f580e77fbaf41bb071fa23d910b4ddbf14d62009f06f44ec9c2da377e9d65d03"} Dec 16 12:12:32 crc kubenswrapper[4805]: I1216 12:12:32.748532 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll" Dec 16 12:12:32 crc kubenswrapper[4805]: I1216 12:12:32.794223 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll" podStartSLOduration=4.7468769250000005 podStartE2EDuration="1m5.794197161s" podCreationTimestamp="2025-12-16 12:11:27 +0000 UTC" firstStartedPulling="2025-12-16 12:11:31.275310385 +0000 UTC m=+964.993568200" lastFinishedPulling="2025-12-16 12:12:32.322630631 +0000 UTC m=+1026.040888436" observedRunningTime="2025-12-16 12:12:32.788154628 +0000 UTC m=+1026.506412433" watchObservedRunningTime="2025-12-16 12:12:32.794197161 +0000 UTC m=+1026.512454986" Dec 16 12:12:34 crc kubenswrapper[4805]: I1216 12:12:34.809747 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:12:34 crc kubenswrapper[4805]: I1216 12:12:34.863868 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t5zrz"] Dec 16 12:12:35 crc kubenswrapper[4805]: I1216 12:12:35.821782 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t5zrz" podUID="da19c238-d2cb-4862-a08d-9b73f2b8a2af" containerName="registry-server" containerID="cri-o://3a0fddfbecf886bd29e637e7bf4cb1268c9e029e507c2c0879fba643f95f70db" gracePeriod=2 Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.414962 4805 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.545785 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da19c238-d2cb-4862-a08d-9b73f2b8a2af-catalog-content\") pod \"da19c238-d2cb-4862-a08d-9b73f2b8a2af\" (UID: \"da19c238-d2cb-4862-a08d-9b73f2b8a2af\") " Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.545965 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da19c238-d2cb-4862-a08d-9b73f2b8a2af-utilities\") pod \"da19c238-d2cb-4862-a08d-9b73f2b8a2af\" (UID: \"da19c238-d2cb-4862-a08d-9b73f2b8a2af\") " Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.546097 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwjbb\" (UniqueName: \"kubernetes.io/projected/da19c238-d2cb-4862-a08d-9b73f2b8a2af-kube-api-access-jwjbb\") pod \"da19c238-d2cb-4862-a08d-9b73f2b8a2af\" (UID: \"da19c238-d2cb-4862-a08d-9b73f2b8a2af\") " Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.547063 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da19c238-d2cb-4862-a08d-9b73f2b8a2af-utilities" (OuterVolumeSpecName: "utilities") pod "da19c238-d2cb-4862-a08d-9b73f2b8a2af" (UID: "da19c238-d2cb-4862-a08d-9b73f2b8a2af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.552271 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da19c238-d2cb-4862-a08d-9b73f2b8a2af-kube-api-access-jwjbb" (OuterVolumeSpecName: "kube-api-access-jwjbb") pod "da19c238-d2cb-4862-a08d-9b73f2b8a2af" (UID: "da19c238-d2cb-4862-a08d-9b73f2b8a2af"). InnerVolumeSpecName "kube-api-access-jwjbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.594342 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da19c238-d2cb-4862-a08d-9b73f2b8a2af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da19c238-d2cb-4862-a08d-9b73f2b8a2af" (UID: "da19c238-d2cb-4862-a08d-9b73f2b8a2af"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.648129 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwjbb\" (UniqueName: \"kubernetes.io/projected/da19c238-d2cb-4862-a08d-9b73f2b8a2af-kube-api-access-jwjbb\") on node \"crc\" DevicePath \"\"" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.648595 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da19c238-d2cb-4862-a08d-9b73f2b8a2af-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.648607 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da19c238-d2cb-4862-a08d-9b73f2b8a2af-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.830465 4805 generic.go:334] "Generic (PLEG): container finished" podID="da19c238-d2cb-4862-a08d-9b73f2b8a2af" containerID="3a0fddfbecf886bd29e637e7bf4cb1268c9e029e507c2c0879fba643f95f70db" exitCode=0 Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.830516 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5zrz" event={"ID":"da19c238-d2cb-4862-a08d-9b73f2b8a2af","Type":"ContainerDied","Data":"3a0fddfbecf886bd29e637e7bf4cb1268c9e029e507c2c0879fba643f95f70db"} Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.830577 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5zrz" event={"ID":"da19c238-d2cb-4862-a08d-9b73f2b8a2af","Type":"ContainerDied","Data":"91df081801ec50ee50e93053739809bb425326471dc9671c07c9409db9f71e46"} Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.830601 4805 scope.go:117] "RemoveContainer" containerID="3a0fddfbecf886bd29e637e7bf4cb1268c9e029e507c2c0879fba643f95f70db" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.831301 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t5zrz" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.852588 4805 scope.go:117] "RemoveContainer" containerID="fa5ad84c04b7bf0a8ab8bf1733ad1a73bef2d60527c94e58b903a74ebe89ea55" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.868650 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t5zrz"] Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.872885 4805 scope.go:117] "RemoveContainer" containerID="cf901763b34b656894e9711871f58051d3319e3d54264309a443163dbda8bdf5" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.880465 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t5zrz"] Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.906761 4805 scope.go:117] "RemoveContainer" containerID="3a0fddfbecf886bd29e637e7bf4cb1268c9e029e507c2c0879fba643f95f70db" Dec 16 12:12:36 crc kubenswrapper[4805]: E1216 12:12:36.907262 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a0fddfbecf886bd29e637e7bf4cb1268c9e029e507c2c0879fba643f95f70db\": container with ID starting with 3a0fddfbecf886bd29e637e7bf4cb1268c9e029e507c2c0879fba643f95f70db not found: ID does not exist" containerID="3a0fddfbecf886bd29e637e7bf4cb1268c9e029e507c2c0879fba643f95f70db" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.907317 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a0fddfbecf886bd29e637e7bf4cb1268c9e029e507c2c0879fba643f95f70db"} err="failed to get container status \"3a0fddfbecf886bd29e637e7bf4cb1268c9e029e507c2c0879fba643f95f70db\": rpc error: code = NotFound desc = could not find container \"3a0fddfbecf886bd29e637e7bf4cb1268c9e029e507c2c0879fba643f95f70db\": container with ID starting with 3a0fddfbecf886bd29e637e7bf4cb1268c9e029e507c2c0879fba643f95f70db not found: ID does not exist" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.907350 4805 scope.go:117] "RemoveContainer" containerID="fa5ad84c04b7bf0a8ab8bf1733ad1a73bef2d60527c94e58b903a74ebe89ea55" Dec 16 12:12:36 crc kubenswrapper[4805]: E1216 12:12:36.907883 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa5ad84c04b7bf0a8ab8bf1733ad1a73bef2d60527c94e58b903a74ebe89ea55\": container with ID starting with fa5ad84c04b7bf0a8ab8bf1733ad1a73bef2d60527c94e58b903a74ebe89ea55 not found: ID does not exist" containerID="fa5ad84c04b7bf0a8ab8bf1733ad1a73bef2d60527c94e58b903a74ebe89ea55" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.907948 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa5ad84c04b7bf0a8ab8bf1733ad1a73bef2d60527c94e58b903a74ebe89ea55"} err="failed to get container status \"fa5ad84c04b7bf0a8ab8bf1733ad1a73bef2d60527c94e58b903a74ebe89ea55\": rpc error: code = NotFound desc = could not find container \"fa5ad84c04b7bf0a8ab8bf1733ad1a73bef2d60527c94e58b903a74ebe89ea55\": container with ID starting with fa5ad84c04b7bf0a8ab8bf1733ad1a73bef2d60527c94e58b903a74ebe89ea55 not found: ID does not exist" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.907978 4805 scope.go:117] "RemoveContainer" containerID="cf901763b34b656894e9711871f58051d3319e3d54264309a443163dbda8bdf5" Dec 16 12:12:36 crc kubenswrapper[4805]: E1216 12:12:36.908340 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cf901763b34b656894e9711871f58051d3319e3d54264309a443163dbda8bdf5\": container with ID starting with cf901763b34b656894e9711871f58051d3319e3d54264309a443163dbda8bdf5 not found: ID does not exist" containerID="cf901763b34b656894e9711871f58051d3319e3d54264309a443163dbda8bdf5" Dec 16 12:12:36 crc kubenswrapper[4805]: I1216 12:12:36.908376 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf901763b34b656894e9711871f58051d3319e3d54264309a443163dbda8bdf5"} err="failed to get container status \"cf901763b34b656894e9711871f58051d3319e3d54264309a443163dbda8bdf5\": rpc error: code = NotFound desc = could not find container \"cf901763b34b656894e9711871f58051d3319e3d54264309a443163dbda8bdf5\": container with ID starting with cf901763b34b656894e9711871f58051d3319e3d54264309a443163dbda8bdf5 not found: ID does not exist" Dec 16 12:12:38 crc kubenswrapper[4805]: I1216 12:12:38.533328 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da19c238-d2cb-4862-a08d-9b73f2b8a2af" path="/var/lib/kubelet/pods/da19c238-d2cb-4862-a08d-9b73f2b8a2af/volumes" Dec 16 12:12:38 crc kubenswrapper[4805]: I1216 12:12:38.688044 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.537963 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-stv8h"] Dec 16 12:12:53 crc kubenswrapper[4805]: E1216 12:12:53.538862 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706f8077-edd1-4edf-95ae-9e9602b72758" containerName="extract-utilities" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.538892 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="706f8077-edd1-4edf-95ae-9e9602b72758" containerName="extract-utilities" Dec 16 12:12:53 crc kubenswrapper[4805]: E1216 12:12:53.538941 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da19c238-d2cb-4862-a08d-9b73f2b8a2af" containerName="extract-content" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.538951 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="da19c238-d2cb-4862-a08d-9b73f2b8a2af" containerName="extract-content" Dec 16 12:12:53 crc kubenswrapper[4805]: E1216 12:12:53.538971 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706f8077-edd1-4edf-95ae-9e9602b72758" containerName="registry-server" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.538981 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="706f8077-edd1-4edf-95ae-9e9602b72758" containerName="registry-server" Dec 16 12:12:53 crc kubenswrapper[4805]: E1216 12:12:53.539006 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da19c238-d2cb-4862-a08d-9b73f2b8a2af" containerName="extract-utilities" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.539012 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="da19c238-d2cb-4862-a08d-9b73f2b8a2af" containerName="extract-utilities" Dec 16 12:12:53 crc kubenswrapper[4805]: E1216 12:12:53.539033 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706f8077-edd1-4edf-95ae-9e9602b72758" containerName="extract-content" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.539165 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="706f8077-edd1-4edf-95ae-9e9602b72758" 
containerName="extract-content" Dec 16 12:12:53 crc kubenswrapper[4805]: E1216 12:12:53.539180 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da19c238-d2cb-4862-a08d-9b73f2b8a2af" containerName="registry-server" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.539193 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="da19c238-d2cb-4862-a08d-9b73f2b8a2af" containerName="registry-server" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.539403 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="706f8077-edd1-4edf-95ae-9e9602b72758" containerName="registry-server" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.539423 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="da19c238-d2cb-4862-a08d-9b73f2b8a2af" containerName="registry-server" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.540382 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-stv8h" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.544478 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.544641 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.544669 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.544939 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-qh79w" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.567620 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-stv8h"] Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.586286 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fedda822-568a-413e-a6aa-bb1a31c7ae40-config\") pod \"dnsmasq-dns-675f4bcbfc-stv8h\" (UID: \"fedda822-568a-413e-a6aa-bb1a31c7ae40\") " pod="openstack/dnsmasq-dns-675f4bcbfc-stv8h" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.586390 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdf4d\" (UniqueName: \"kubernetes.io/projected/fedda822-568a-413e-a6aa-bb1a31c7ae40-kube-api-access-vdf4d\") pod \"dnsmasq-dns-675f4bcbfc-stv8h\" (UID: \"fedda822-568a-413e-a6aa-bb1a31c7ae40\") " pod="openstack/dnsmasq-dns-675f4bcbfc-stv8h" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.651414 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g7wbj"] Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.652862 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.659650 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.676174 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g7wbj"] Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.688327 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdf4d\" (UniqueName: \"kubernetes.io/projected/fedda822-568a-413e-a6aa-bb1a31c7ae40-kube-api-access-vdf4d\") pod \"dnsmasq-dns-675f4bcbfc-stv8h\" (UID: \"fedda822-568a-413e-a6aa-bb1a31c7ae40\") " pod="openstack/dnsmasq-dns-675f4bcbfc-stv8h" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.688391 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5bc9b8a-1d0a-4284-8f33-2000536199c8-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g7wbj\" (UID: \"b5bc9b8a-1d0a-4284-8f33-2000536199c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.688410 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcs5k\" (UniqueName: \"kubernetes.io/projected/b5bc9b8a-1d0a-4284-8f33-2000536199c8-kube-api-access-kcs5k\") pod \"dnsmasq-dns-78dd6ddcc-g7wbj\" (UID: \"b5bc9b8a-1d0a-4284-8f33-2000536199c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.688462 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5bc9b8a-1d0a-4284-8f33-2000536199c8-config\") pod \"dnsmasq-dns-78dd6ddcc-g7wbj\" (UID: \"b5bc9b8a-1d0a-4284-8f33-2000536199c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.688503 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fedda822-568a-413e-a6aa-bb1a31c7ae40-config\") pod \"dnsmasq-dns-675f4bcbfc-stv8h\" (UID: \"fedda822-568a-413e-a6aa-bb1a31c7ae40\") " pod="openstack/dnsmasq-dns-675f4bcbfc-stv8h" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.689339 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fedda822-568a-413e-a6aa-bb1a31c7ae40-config\") pod \"dnsmasq-dns-675f4bcbfc-stv8h\" (UID: \"fedda822-568a-413e-a6aa-bb1a31c7ae40\") " pod="openstack/dnsmasq-dns-675f4bcbfc-stv8h" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.717166 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdf4d\" (UniqueName: \"kubernetes.io/projected/fedda822-568a-413e-a6aa-bb1a31c7ae40-kube-api-access-vdf4d\") pod \"dnsmasq-dns-675f4bcbfc-stv8h\" (UID: \"fedda822-568a-413e-a6aa-bb1a31c7ae40\") " pod="openstack/dnsmasq-dns-675f4bcbfc-stv8h" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.789552 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5bc9b8a-1d0a-4284-8f33-2000536199c8-config\") pod \"dnsmasq-dns-78dd6ddcc-g7wbj\" (UID: \"b5bc9b8a-1d0a-4284-8f33-2000536199c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 
12:12:53.789731 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcs5k\" (UniqueName: \"kubernetes.io/projected/b5bc9b8a-1d0a-4284-8f33-2000536199c8-kube-api-access-kcs5k\") pod \"dnsmasq-dns-78dd6ddcc-g7wbj\" (UID: \"b5bc9b8a-1d0a-4284-8f33-2000536199c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.789750 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5bc9b8a-1d0a-4284-8f33-2000536199c8-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g7wbj\" (UID: \"b5bc9b8a-1d0a-4284-8f33-2000536199c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.790759 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5bc9b8a-1d0a-4284-8f33-2000536199c8-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g7wbj\" (UID: \"b5bc9b8a-1d0a-4284-8f33-2000536199c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.790814 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5bc9b8a-1d0a-4284-8f33-2000536199c8-config\") pod \"dnsmasq-dns-78dd6ddcc-g7wbj\" (UID: \"b5bc9b8a-1d0a-4284-8f33-2000536199c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.846447 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcs5k\" (UniqueName: \"kubernetes.io/projected/b5bc9b8a-1d0a-4284-8f33-2000536199c8-kube-api-access-kcs5k\") pod \"dnsmasq-dns-78dd6ddcc-g7wbj\" (UID: \"b5bc9b8a-1d0a-4284-8f33-2000536199c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.858254 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-stv8h" Dec 16 12:12:53 crc kubenswrapper[4805]: I1216 12:12:53.983312 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" Dec 16 12:12:54 crc kubenswrapper[4805]: I1216 12:12:54.455172 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-stv8h"] Dec 16 12:12:54 crc kubenswrapper[4805]: W1216 12:12:54.526023 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5bc9b8a_1d0a_4284_8f33_2000536199c8.slice/crio-952b3c0aa859113cf211d904e7501ae9ef13b6ed49ba8cc6831d3fcd81c8097d WatchSource:0}: Error finding container 952b3c0aa859113cf211d904e7501ae9ef13b6ed49ba8cc6831d3fcd81c8097d: Status 404 returned error can't find the container with id 952b3c0aa859113cf211d904e7501ae9ef13b6ed49ba8cc6831d3fcd81c8097d Dec 16 12:12:54 crc kubenswrapper[4805]: I1216 12:12:54.531989 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g7wbj"] Dec 16 12:12:54 crc kubenswrapper[4805]: I1216 12:12:54.988259 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" event={"ID":"b5bc9b8a-1d0a-4284-8f33-2000536199c8","Type":"ContainerStarted","Data":"952b3c0aa859113cf211d904e7501ae9ef13b6ed49ba8cc6831d3fcd81c8097d"} Dec 16 12:12:54 crc kubenswrapper[4805]: I1216 12:12:54.989340 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-stv8h" event={"ID":"fedda822-568a-413e-a6aa-bb1a31c7ae40","Type":"ContainerStarted","Data":"f827d65d52b715f044b00cf799c26c02d5b57b607b9237852415f8d6b510e899"} Dec 16 12:12:56 crc kubenswrapper[4805]: I1216 12:12:56.426213 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-stv8h"] Dec 16 12:12:56 crc kubenswrapper[4805]: I1216 12:12:56.458922 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mgp7n"] Dec 16 12:12:56 crc kubenswrapper[4805]: I1216 12:12:56.464378 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" Dec 16 12:12:56 crc kubenswrapper[4805]: I1216 12:12:56.474059 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mgp7n"] Dec 16 12:12:56 crc kubenswrapper[4805]: I1216 12:12:56.529366 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15427216-328f-499e-b505-0d689cbf31f2-config\") pod \"dnsmasq-dns-666b6646f7-mgp7n\" (UID: \"15427216-328f-499e-b505-0d689cbf31f2\") " pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" Dec 16 12:12:56 crc kubenswrapper[4805]: I1216 12:12:56.529707 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15427216-328f-499e-b505-0d689cbf31f2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-mgp7n\" (UID: \"15427216-328f-499e-b505-0d689cbf31f2\") " pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" Dec 16 12:12:56 crc kubenswrapper[4805]: I1216 12:12:56.529786 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcwv6\" (UniqueName: \"kubernetes.io/projected/15427216-328f-499e-b505-0d689cbf31f2-kube-api-access-jcwv6\") pod \"dnsmasq-dns-666b6646f7-mgp7n\" (UID: \"15427216-328f-499e-b505-0d689cbf31f2\") " pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" Dec 16 12:12:56 crc kubenswrapper[4805]: I1216 12:12:56.631402 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcwv6\" (UniqueName: \"kubernetes.io/projected/15427216-328f-499e-b505-0d689cbf31f2-kube-api-access-jcwv6\") pod \"dnsmasq-dns-666b6646f7-mgp7n\" (UID: \"15427216-328f-499e-b505-0d689cbf31f2\") " pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" Dec 16 12:12:56 crc kubenswrapper[4805]: I1216 12:12:56.631444 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15427216-328f-499e-b505-0d689cbf31f2-config\") pod \"dnsmasq-dns-666b6646f7-mgp7n\" (UID: \"15427216-328f-499e-b505-0d689cbf31f2\") " pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" Dec 16 12:12:56 crc kubenswrapper[4805]: I1216 12:12:56.631490 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15427216-328f-499e-b505-0d689cbf31f2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-mgp7n\" (UID: \"15427216-328f-499e-b505-0d689cbf31f2\") " pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" Dec 16 12:12:56 crc kubenswrapper[4805]: I1216 12:12:56.632442 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15427216-328f-499e-b505-0d689cbf31f2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-mgp7n\" (UID: \"15427216-328f-499e-b505-0d689cbf31f2\") " pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" Dec 16 12:12:56 crc kubenswrapper[4805]: I1216 12:12:56.644475 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15427216-328f-499e-b505-0d689cbf31f2-config\") pod \"dnsmasq-dns-666b6646f7-mgp7n\" (UID: \"15427216-328f-499e-b505-0d689cbf31f2\") " pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" Dec 16 12:12:56 crc kubenswrapper[4805]: I1216 12:12:56.704232 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcwv6\" (UniqueName: 
\"kubernetes.io/projected/15427216-328f-499e-b505-0d689cbf31f2-kube-api-access-jcwv6\") pod \"dnsmasq-dns-666b6646f7-mgp7n\" (UID: \"15427216-328f-499e-b505-0d689cbf31f2\") " pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" Dec 16 12:12:56 crc kubenswrapper[4805]: I1216 12:12:56.789755 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.149507 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g7wbj"] Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.197844 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jmhpf"] Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.199213 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.209843 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jmhpf"] Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.345915 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f68p\" (UniqueName: \"kubernetes.io/projected/2df3a617-03a6-49ea-b41f-bd29a3da10fa-kube-api-access-4f68p\") pod \"dnsmasq-dns-57d769cc4f-jmhpf\" (UID: \"2df3a617-03a6-49ea-b41f-bd29a3da10fa\") " pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.346321 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2df3a617-03a6-49ea-b41f-bd29a3da10fa-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jmhpf\" (UID: \"2df3a617-03a6-49ea-b41f-bd29a3da10fa\") " pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.346412 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2df3a617-03a6-49ea-b41f-bd29a3da10fa-config\") pod \"dnsmasq-dns-57d769cc4f-jmhpf\" (UID: \"2df3a617-03a6-49ea-b41f-bd29a3da10fa\") " pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.501276 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2df3a617-03a6-49ea-b41f-bd29a3da10fa-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jmhpf\" (UID: \"2df3a617-03a6-49ea-b41f-bd29a3da10fa\") " pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.501359 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2df3a617-03a6-49ea-b41f-bd29a3da10fa-config\") pod \"dnsmasq-dns-57d769cc4f-jmhpf\" (UID: \"2df3a617-03a6-49ea-b41f-bd29a3da10fa\") " pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.501405 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f68p\" (UniqueName: \"kubernetes.io/projected/2df3a617-03a6-49ea-b41f-bd29a3da10fa-kube-api-access-4f68p\") pod \"dnsmasq-dns-57d769cc4f-jmhpf\" (UID: \"2df3a617-03a6-49ea-b41f-bd29a3da10fa\") " pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.502574 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2df3a617-03a6-49ea-b41f-bd29a3da10fa-config\") pod \"dnsmasq-dns-57d769cc4f-jmhpf\" (UID: \"2df3a617-03a6-49ea-b41f-bd29a3da10fa\") " pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.502601 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2df3a617-03a6-49ea-b41f-bd29a3da10fa-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jmhpf\" (UID: \"2df3a617-03a6-49ea-b41f-bd29a3da10fa\") " pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.535000 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f68p\" (UniqueName: \"kubernetes.io/projected/2df3a617-03a6-49ea-b41f-bd29a3da10fa-kube-api-access-4f68p\") pod \"dnsmasq-dns-57d769cc4f-jmhpf\" (UID: \"2df3a617-03a6-49ea-b41f-bd29a3da10fa\") " pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.753796 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mgp7n"] Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.830707 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.862925 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.864273 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.868491 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.868989 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.869172 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-twxjr" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.869927 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.870052 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.875542 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.876525 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 16 12:12:57 crc kubenswrapper[4805]: I1216 12:12:57.906434 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.021092 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.021394 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.021433 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2smx4\" (UniqueName: \"kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-kube-api-access-2smx4\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.021453 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55267a44-aaa0-494b-922a-014b08eddcd9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.021476 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-config-data\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.021497 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.021516 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55267a44-aaa0-494b-922a-014b08eddcd9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.021532 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.021556 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.021600 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.021627 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.118262 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" event={"ID":"15427216-328f-499e-b505-0d689cbf31f2","Type":"ContainerStarted","Data":"57259f0bdd8b6dca12e879671fc20d1061b2f4de9d787b590632ce99eda7af92"} Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.122822 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.122871 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.122957 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.122987 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.123025 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2smx4\" (UniqueName: \"kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-kube-api-access-2smx4\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.123329 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.123566 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55267a44-aaa0-494b-922a-014b08eddcd9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.123614 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-config-data\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.123730 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.123761 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55267a44-aaa0-494b-922a-014b08eddcd9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.123799 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.123839 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.126849 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.127481 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.127587 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.128773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-config-data\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.133797 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55267a44-aaa0-494b-922a-014b08eddcd9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.139893 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55267a44-aaa0-494b-922a-014b08eddcd9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc 
kubenswrapper[4805]: I1216 12:12:58.140270 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.140357 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.147513 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2smx4\" (UniqueName: \"kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-kube-api-access-2smx4\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.170358 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.207298 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.280825 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.331527 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jmhpf"] Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.339787 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.341877 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.353989 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.354280 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.354435 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.354550 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.354823 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-k9stn" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.354920 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.355008 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.407751 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 16 12:12:58 crc kubenswrapper[4805]: W1216 12:12:58.413474 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2df3a617_03a6_49ea_b41f_bd29a3da10fa.slice/crio-d56c98a93f660d5bbe00d9092be1836b7cb2de8e5ebea0a29448350c0ff70663 WatchSource:0}: Error finding container d56c98a93f660d5bbe00d9092be1836b7cb2de8e5ebea0a29448350c0ff70663: Status 404 returned error can't find the container with id d56c98a93f660d5bbe00d9092be1836b7cb2de8e5ebea0a29448350c0ff70663 Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.435513 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.435591 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.435633 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/93a741b0-5bcd-407b-8af7-90bd52380217-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.435668 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.435727 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtpkg\" (UniqueName: \"kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-kube-api-access-rtpkg\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.435777 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.435804 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/93a741b0-5bcd-407b-8af7-90bd52380217-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.435838 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.435873 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.435923 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.435944 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.537678 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.537944 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.537975 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/93a741b0-5bcd-407b-8af7-90bd52380217-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.538008 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.538044 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtpkg\" (UniqueName: \"kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-kube-api-access-rtpkg\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.538080 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.538104 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/93a741b0-5bcd-407b-8af7-90bd52380217-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.538131 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.538166 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.538199 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.538227 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.539317 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.539400 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.540297 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.540457 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.540644 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.541756 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.544010 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/93a741b0-5bcd-407b-8af7-90bd52380217-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.544105 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.544256 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.547514 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/93a741b0-5bcd-407b-8af7-90bd52380217-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.575087 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.624888 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtpkg\" (UniqueName: \"kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-kube-api-access-rtpkg\") pod \"rabbitmq-cell1-server-0\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:58 crc kubenswrapper[4805]: I1216 12:12:58.689049 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.060707 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 16 12:12:59 crc kubenswrapper[4805]: W1216 12:12:59.082866 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55267a44_aaa0_494b_922a_014b08eddcd9.slice/crio-dcca8b6be5f16cb4ee871e4c8511c00bcd461954c7833446a3891d0d65646e72 WatchSource:0}: Error finding container dcca8b6be5f16cb4ee871e4c8511c00bcd461954c7833446a3891d0d65646e72: Status 404 returned error can't find the container with id dcca8b6be5f16cb4ee871e4c8511c00bcd461954c7833446a3891d0d65646e72 Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.131341 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" event={"ID":"2df3a617-03a6-49ea-b41f-bd29a3da10fa","Type":"ContainerStarted","Data":"d56c98a93f660d5bbe00d9092be1836b7cb2de8e5ebea0a29448350c0ff70663"} Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.133193 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"55267a44-aaa0-494b-922a-014b08eddcd9","Type":"ContainerStarted","Data":"dcca8b6be5f16cb4ee871e4c8511c00bcd461954c7833446a3891d0d65646e72"} Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.241355 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 16 12:12:59 crc kubenswrapper[4805]: W1216 12:12:59.294725 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93a741b0_5bcd_407b_8af7_90bd52380217.slice/crio-f3ae1d7a95100f2aee378b22642056b09074b48d7514572f09e95f055164e80b WatchSource:0}: Error finding container f3ae1d7a95100f2aee378b22642056b09074b48d7514572f09e95f055164e80b: Status 404 returned error can't find the container with id f3ae1d7a95100f2aee378b22642056b09074b48d7514572f09e95f055164e80b Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.385132 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.426086 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.426244 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.438005 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-f4t42" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.438218 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.438318 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.438461 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.438586 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.447057 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.576115 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/dd66837a-9e6f-41fd-91a0-f010e02a3a80-secrets\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.576172 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.576229 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd66837a-9e6f-41fd-91a0-f010e02a3a80-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.576275 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dd66837a-9e6f-41fd-91a0-f010e02a3a80-config-data-default\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.576296 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd66837a-9e6f-41fd-91a0-f010e02a3a80-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.576324 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd66837a-9e6f-41fd-91a0-f010e02a3a80-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.576341 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc6xl\" 
(UniqueName: \"kubernetes.io/projected/dd66837a-9e6f-41fd-91a0-f010e02a3a80-kube-api-access-bc6xl\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.576367 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dd66837a-9e6f-41fd-91a0-f010e02a3a80-kolla-config\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.576465 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dd66837a-9e6f-41fd-91a0-f010e02a3a80-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.695160 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dd66837a-9e6f-41fd-91a0-f010e02a3a80-config-data-default\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.695219 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd66837a-9e6f-41fd-91a0-f010e02a3a80-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.695247 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd66837a-9e6f-41fd-91a0-f010e02a3a80-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.695265 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc6xl\" (UniqueName: \"kubernetes.io/projected/dd66837a-9e6f-41fd-91a0-f010e02a3a80-kube-api-access-bc6xl\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.695283 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dd66837a-9e6f-41fd-91a0-f010e02a3a80-kolla-config\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.695319 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dd66837a-9e6f-41fd-91a0-f010e02a3a80-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.695365 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/dd66837a-9e6f-41fd-91a0-f010e02a3a80-secrets\") pod \"openstack-galera-0\" (UID: 
\"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.695387 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.695421 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd66837a-9e6f-41fd-91a0-f010e02a3a80-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.696370 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dd66837a-9e6f-41fd-91a0-f010e02a3a80-kolla-config\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.696509 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.696813 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dd66837a-9e6f-41fd-91a0-f010e02a3a80-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.697151 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dd66837a-9e6f-41fd-91a0-f010e02a3a80-config-data-default\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.697681 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd66837a-9e6f-41fd-91a0-f010e02a3a80-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.735578 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc6xl\" (UniqueName: \"kubernetes.io/projected/dd66837a-9e6f-41fd-91a0-f010e02a3a80-kube-api-access-bc6xl\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.737828 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd66837a-9e6f-41fd-91a0-f010e02a3a80-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.747595 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/dd66837a-9e6f-41fd-91a0-f010e02a3a80-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.747780 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/dd66837a-9e6f-41fd-91a0-f010e02a3a80-secrets\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.752643 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"dd66837a-9e6f-41fd-91a0-f010e02a3a80\") " pod="openstack/openstack-galera-0" Dec 16 12:12:59 crc kubenswrapper[4805]: I1216 12:12:59.761289 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.158395 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"93a741b0-5bcd-407b-8af7-90bd52380217","Type":"ContainerStarted","Data":"f3ae1d7a95100f2aee378b22642056b09074b48d7514572f09e95f055164e80b"} Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.543932 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 16 12:13:00 crc kubenswrapper[4805]: W1216 12:13:00.567846 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd66837a_9e6f_41fd_91a0_f010e02a3a80.slice/crio-c0ab24af4d10405eb6096233e04195e074fad0600d4c83f5361ee208a15f53e5 WatchSource:0}: Error finding container c0ab24af4d10405eb6096233e04195e074fad0600d4c83f5361ee208a15f53e5: Status 404 returned error can't find the container with id c0ab24af4d10405eb6096233e04195e074fad0600d4c83f5361ee208a15f53e5 Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.702107 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.707605 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.720481 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.721035 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.721117 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-fvr24" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.721245 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.721802 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.822254 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2b009857-5d9e-4d6e-979c-d7fc3357bd66-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.822307 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b009857-5d9e-4d6e-979c-d7fc3357bd66-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.822331 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2b009857-5d9e-4d6e-979c-d7fc3357bd66-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.822475 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.822573 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdpwc\" (UniqueName: \"kubernetes.io/projected/2b009857-5d9e-4d6e-979c-d7fc3357bd66-kube-api-access-xdpwc\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.822605 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/2b009857-5d9e-4d6e-979c-d7fc3357bd66-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.822631 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/2b009857-5d9e-4d6e-979c-d7fc3357bd66-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.822702 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b009857-5d9e-4d6e-979c-d7fc3357bd66-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.822765 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b009857-5d9e-4d6e-979c-d7fc3357bd66-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.924364 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b009857-5d9e-4d6e-979c-d7fc3357bd66-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.924422 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b009857-5d9e-4d6e-979c-d7fc3357bd66-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.924469 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2b009857-5d9e-4d6e-979c-d7fc3357bd66-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.924498 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2b009857-5d9e-4d6e-979c-d7fc3357bd66-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.924514 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b009857-5d9e-4d6e-979c-d7fc3357bd66-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.924553 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.924583 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdpwc\" (UniqueName: 
\"kubernetes.io/projected/2b009857-5d9e-4d6e-979c-d7fc3357bd66-kube-api-access-xdpwc\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.924603 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/2b009857-5d9e-4d6e-979c-d7fc3357bd66-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.924620 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2b009857-5d9e-4d6e-979c-d7fc3357bd66-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.925571 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2b009857-5d9e-4d6e-979c-d7fc3357bd66-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.925648 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2b009857-5d9e-4d6e-979c-d7fc3357bd66-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.927814 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2b009857-5d9e-4d6e-979c-d7fc3357bd66-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.928316 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.928766 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b009857-5d9e-4d6e-979c-d7fc3357bd66-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.949119 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b009857-5d9e-4d6e-979c-d7fc3357bd66-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.986585 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/2b009857-5d9e-4d6e-979c-d7fc3357bd66-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " 
pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:00 crc kubenswrapper[4805]: I1216 12:13:00.986587 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b009857-5d9e-4d6e-979c-d7fc3357bd66-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.001155 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdpwc\" (UniqueName: \"kubernetes.io/projected/2b009857-5d9e-4d6e-979c-d7fc3357bd66-kube-api-access-xdpwc\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.057839 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2b009857-5d9e-4d6e-979c-d7fc3357bd66\") " pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.151353 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.153398 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.167752 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.168044 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-9bl9d" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.168224 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.203418 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.248214 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76cc6f3a-504f-4096-8c08-efbcb51ad101-config-data\") pod \"memcached-0\" (UID: \"76cc6f3a-504f-4096-8c08-efbcb51ad101\") " pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.248291 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbqfn\" (UniqueName: \"kubernetes.io/projected/76cc6f3a-504f-4096-8c08-efbcb51ad101-kube-api-access-pbqfn\") pod \"memcached-0\" (UID: \"76cc6f3a-504f-4096-8c08-efbcb51ad101\") " pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.248321 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/76cc6f3a-504f-4096-8c08-efbcb51ad101-kolla-config\") pod \"memcached-0\" (UID: \"76cc6f3a-504f-4096-8c08-efbcb51ad101\") " pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.248373 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cc6f3a-504f-4096-8c08-efbcb51ad101-combined-ca-bundle\") pod \"memcached-0\" 
(UID: \"76cc6f3a-504f-4096-8c08-efbcb51ad101\") " pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.248404 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/76cc6f3a-504f-4096-8c08-efbcb51ad101-memcached-tls-certs\") pod \"memcached-0\" (UID: \"76cc6f3a-504f-4096-8c08-efbcb51ad101\") " pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.267890 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dd66837a-9e6f-41fd-91a0-f010e02a3a80","Type":"ContainerStarted","Data":"c0ab24af4d10405eb6096233e04195e074fad0600d4c83f5361ee208a15f53e5"} Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.352701 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.353751 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cc6f3a-504f-4096-8c08-efbcb51ad101-combined-ca-bundle\") pod \"memcached-0\" (UID: \"76cc6f3a-504f-4096-8c08-efbcb51ad101\") " pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.353782 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/76cc6f3a-504f-4096-8c08-efbcb51ad101-memcached-tls-certs\") pod \"memcached-0\" (UID: \"76cc6f3a-504f-4096-8c08-efbcb51ad101\") " pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.353835 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76cc6f3a-504f-4096-8c08-efbcb51ad101-config-data\") pod \"memcached-0\" (UID: \"76cc6f3a-504f-4096-8c08-efbcb51ad101\") " pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.353867 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbqfn\" (UniqueName: \"kubernetes.io/projected/76cc6f3a-504f-4096-8c08-efbcb51ad101-kube-api-access-pbqfn\") pod \"memcached-0\" (UID: \"76cc6f3a-504f-4096-8c08-efbcb51ad101\") " pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.353886 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/76cc6f3a-504f-4096-8c08-efbcb51ad101-kolla-config\") pod \"memcached-0\" (UID: \"76cc6f3a-504f-4096-8c08-efbcb51ad101\") " pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.354486 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/76cc6f3a-504f-4096-8c08-efbcb51ad101-kolla-config\") pod \"memcached-0\" (UID: \"76cc6f3a-504f-4096-8c08-efbcb51ad101\") " pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.355034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76cc6f3a-504f-4096-8c08-efbcb51ad101-config-data\") pod \"memcached-0\" (UID: \"76cc6f3a-504f-4096-8c08-efbcb51ad101\") " pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.363221 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/76cc6f3a-504f-4096-8c08-efbcb51ad101-memcached-tls-certs\") pod \"memcached-0\" (UID: \"76cc6f3a-504f-4096-8c08-efbcb51ad101\") " pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.363653 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cc6f3a-504f-4096-8c08-efbcb51ad101-combined-ca-bundle\") pod \"memcached-0\" (UID: \"76cc6f3a-504f-4096-8c08-efbcb51ad101\") " pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.378230 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbqfn\" (UniqueName: \"kubernetes.io/projected/76cc6f3a-504f-4096-8c08-efbcb51ad101-kube-api-access-pbqfn\") pod \"memcached-0\" (UID: \"76cc6f3a-504f-4096-8c08-efbcb51ad101\") " pod="openstack/memcached-0" Dec 16 12:13:01 crc kubenswrapper[4805]: I1216 12:13:01.484169 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 16 12:13:02 crc kubenswrapper[4805]: I1216 12:13:02.256918 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 16 12:13:02 crc kubenswrapper[4805]: I1216 12:13:02.604377 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 16 12:13:03 crc kubenswrapper[4805]: I1216 12:13:03.193026 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 16 12:13:03 crc kubenswrapper[4805]: I1216 12:13:03.194331 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 16 12:13:03 crc kubenswrapper[4805]: I1216 12:13:03.201985 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-d7fb9" Dec 16 12:13:03 crc kubenswrapper[4805]: I1216 12:13:03.210215 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 16 12:13:03 crc kubenswrapper[4805]: I1216 12:13:03.306281 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8np6\" (UniqueName: \"kubernetes.io/projected/3c9ccbdb-6316-4956-bc60-507faeeba295-kube-api-access-j8np6\") pod \"kube-state-metrics-0\" (UID: \"3c9ccbdb-6316-4956-bc60-507faeeba295\") " pod="openstack/kube-state-metrics-0" Dec 16 12:13:03 crc kubenswrapper[4805]: I1216 12:13:03.312497 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2b009857-5d9e-4d6e-979c-d7fc3357bd66","Type":"ContainerStarted","Data":"d2303e45ce5350be696ba40e7db68c8fd3439b046cbafa997d29985d8e3fbaf5"} Dec 16 12:13:03 crc kubenswrapper[4805]: I1216 12:13:03.315797 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"76cc6f3a-504f-4096-8c08-efbcb51ad101","Type":"ContainerStarted","Data":"16115f669c16e0cbae70ec15873d986dfa6d0cfb787fd0537313d5a586a2f3fe"} Dec 16 12:13:03 crc kubenswrapper[4805]: I1216 12:13:03.408308 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8np6\" (UniqueName: \"kubernetes.io/projected/3c9ccbdb-6316-4956-bc60-507faeeba295-kube-api-access-j8np6\") pod \"kube-state-metrics-0\" (UID: \"3c9ccbdb-6316-4956-bc60-507faeeba295\") " pod="openstack/kube-state-metrics-0" Dec 16 12:13:03 crc kubenswrapper[4805]: I1216 12:13:03.451195 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8np6\" (UniqueName: \"kubernetes.io/projected/3c9ccbdb-6316-4956-bc60-507faeeba295-kube-api-access-j8np6\") pod \"kube-state-metrics-0\" (UID: \"3c9ccbdb-6316-4956-bc60-507faeeba295\") " pod="openstack/kube-state-metrics-0" Dec 16 12:13:03 crc kubenswrapper[4805]: I1216 12:13:03.529744 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 16 12:13:04 crc kubenswrapper[4805]: I1216 12:13:04.119794 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 16 12:13:04 crc kubenswrapper[4805]: I1216 12:13:04.368258 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c9ccbdb-6316-4956-bc60-507faeeba295","Type":"ContainerStarted","Data":"b3c8e2a8add0149944415a4f4f02c5881b842bd3366e15259ad9fab9b6ac20b4"} Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.599125 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9xw22"] Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.603067 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.611434 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.611701 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.611866 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-rpc5j" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.624190 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9xw22"] Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.664876 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-var-log-ovn\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.664958 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-var-run\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.664996 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-ovn-controller-tls-certs\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.665066 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glmpr\" (UniqueName: \"kubernetes.io/projected/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-kube-api-access-glmpr\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 
12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.665134 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-var-run-ovn\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.665176 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-scripts\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.665212 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-combined-ca-bundle\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.704404 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-ffmtv"] Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.706247 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.725912 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ffmtv"] Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.766364 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-var-run\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.766428 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-ovn-controller-tls-certs\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.766509 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glmpr\" (UniqueName: \"kubernetes.io/projected/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-kube-api-access-glmpr\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.766554 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-var-run-ovn\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.766569 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-scripts\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: 
I1216 12:13:05.766602 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-combined-ca-bundle\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.766632 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-var-log-ovn\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.766986 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-var-run-ovn\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.767094 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-var-log-ovn\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.767213 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-var-run\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.777410 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-scripts\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.783640 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-ovn-controller-tls-certs\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.797988 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-combined-ca-bundle\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.806615 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glmpr\" (UniqueName: \"kubernetes.io/projected/f3e4543d-c48b-45a2-8eea-2584d5bba4b6-kube-api-access-glmpr\") pod \"ovn-controller-9xw22\" (UID: \"f3e4543d-c48b-45a2-8eea-2584d5bba4b6\") " pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.868432 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-var-log\") pod \"ovn-controller-ovs-ffmtv\" 
(UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.868523 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-var-lib\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.868578 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-scripts\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.868605 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlmfb\" (UniqueName: \"kubernetes.io/projected/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-kube-api-access-xlmfb\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.868633 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-etc-ovs\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.868709 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-var-run\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.940847 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9xw22" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.976908 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-var-run\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.977020 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-var-log\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.977058 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-var-lib\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.977088 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-scripts\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.977120 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlmfb\" (UniqueName: \"kubernetes.io/projected/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-kube-api-access-xlmfb\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.977171 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-etc-ovs\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.977493 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-etc-ovs\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.977592 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-var-run\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.977701 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-var-log\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.978617 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-var-lib\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:05 crc kubenswrapper[4805]: I1216 12:13:05.980337 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-scripts\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:06 crc kubenswrapper[4805]: I1216 12:13:06.012903 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlmfb\" (UniqueName: \"kubernetes.io/projected/b7c2d7ad-f96b-4eaa-b498-46e0739154f1-kube-api-access-xlmfb\") pod \"ovn-controller-ovs-ffmtv\" (UID: \"b7c2d7ad-f96b-4eaa-b498-46e0739154f1\") " pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:06 crc kubenswrapper[4805]: I1216 12:13:06.061751 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:06 crc kubenswrapper[4805]: I1216 12:13:06.608984 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9xw22"] Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.722292 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.724410 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.726525 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.730739 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-nvwdr" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.730972 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.731113 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.731696 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.745132 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.745261 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d101de-9ba7-46dc-830e-3c25397a64d2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.745304 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/39d101de-9ba7-46dc-830e-3c25397a64d2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: 
\"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.745325 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d101de-9ba7-46dc-830e-3c25397a64d2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.745353 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6fkp\" (UniqueName: \"kubernetes.io/projected/39d101de-9ba7-46dc-830e-3c25397a64d2-kube-api-access-l6fkp\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.745373 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39d101de-9ba7-46dc-830e-3c25397a64d2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.745399 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39d101de-9ba7-46dc-830e-3c25397a64d2-config\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.745435 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d101de-9ba7-46dc-830e-3c25397a64d2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.753214 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.847360 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.847426 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d101de-9ba7-46dc-830e-3c25397a64d2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.847472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/39d101de-9ba7-46dc-830e-3c25397a64d2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.847500 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d101de-9ba7-46dc-830e-3c25397a64d2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.847533 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6fkp\" (UniqueName: \"kubernetes.io/projected/39d101de-9ba7-46dc-830e-3c25397a64d2-kube-api-access-l6fkp\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.847555 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39d101de-9ba7-46dc-830e-3c25397a64d2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.847588 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39d101de-9ba7-46dc-830e-3c25397a64d2-config\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.847804 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d101de-9ba7-46dc-830e-3c25397a64d2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.848023 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.848884 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/39d101de-9ba7-46dc-830e-3c25397a64d2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.850184 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39d101de-9ba7-46dc-830e-3c25397a64d2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.852413 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39d101de-9ba7-46dc-830e-3c25397a64d2-config\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.858320 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d101de-9ba7-46dc-830e-3c25397a64d2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.859300 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/39d101de-9ba7-46dc-830e-3c25397a64d2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.871738 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d101de-9ba7-46dc-830e-3c25397a64d2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.883497 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.903876 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6fkp\" (UniqueName: \"kubernetes.io/projected/39d101de-9ba7-46dc-830e-3c25397a64d2-kube-api-access-l6fkp\") pod \"ovsdbserver-nb-0\" (UID: \"39d101de-9ba7-46dc-830e-3c25397a64d2\") " pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:07 crc kubenswrapper[4805]: I1216 12:13:07.984955 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ffmtv"] Dec 16 12:13:08 crc kubenswrapper[4805]: I1216 12:13:08.062325 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:08 crc kubenswrapper[4805]: W1216 12:13:08.628213 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3e4543d_c48b_45a2_8eea_2584d5bba4b6.slice/crio-6f391698d5c03b1c6f1b40a7bd81c53ff11319eb1021f3f1e3fe313075f96d70 WatchSource:0}: Error finding container 6f391698d5c03b1c6f1b40a7bd81c53ff11319eb1021f3f1e3fe313075f96d70: Status 404 returned error can't find the container with id 6f391698d5c03b1c6f1b40a7bd81c53ff11319eb1021f3f1e3fe313075f96d70 Dec 16 12:13:08 crc kubenswrapper[4805]: W1216 12:13:08.645989 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7c2d7ad_f96b_4eaa_b498_46e0739154f1.slice/crio-9142383d7caa9eda54943298f534d3b11480085a2163e4c0ed12eae63afb65ec WatchSource:0}: Error finding container 9142383d7caa9eda54943298f534d3b11480085a2163e4c0ed12eae63afb65ec: Status 404 returned error can't find the container with id 9142383d7caa9eda54943298f534d3b11480085a2163e4c0ed12eae63afb65ec Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.376925 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-j76qb"] Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.390595 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.393598 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.399030 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-j76qb"] Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.542688 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9xw22" event={"ID":"f3e4543d-c48b-45a2-8eea-2584d5bba4b6","Type":"ContainerStarted","Data":"6f391698d5c03b1c6f1b40a7bd81c53ff11319eb1021f3f1e3fe313075f96d70"} Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.548781 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ffmtv" event={"ID":"b7c2d7ad-f96b-4eaa-b498-46e0739154f1","Type":"ContainerStarted","Data":"9142383d7caa9eda54943298f534d3b11480085a2163e4c0ed12eae63afb65ec"} Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.595175 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-ovn-rundir\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.596778 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-config\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.596840 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv9dl\" (UniqueName: \"kubernetes.io/projected/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-kube-api-access-jv9dl\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.596888 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.596924 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-combined-ca-bundle\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.596971 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-ovs-rundir\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.698535 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-ovn-rundir\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.698633 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-config\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.698674 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv9dl\" (UniqueName: \"kubernetes.io/projected/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-kube-api-access-jv9dl\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.698694 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.698715 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-combined-ca-bundle\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.698754 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-ovs-rundir\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.699093 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-ovs-rundir\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.699197 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-ovn-rundir\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.700478 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-config\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.723213 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv9dl\" (UniqueName: 
\"kubernetes.io/projected/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-kube-api-access-jv9dl\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.724349 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.731354 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d3d6086-fcc3-4ba0-af90-445bbcb3ff06-combined-ca-bundle\") pod \"ovn-controller-metrics-j76qb\" (UID: \"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06\") " pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:09 crc kubenswrapper[4805]: I1216 12:13:09.732885 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-j76qb" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.554739 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.556576 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.556667 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.562523 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.562619 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.562746 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.563749 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-br7rh" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.731704 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfbd99cf-6302-48c2-b119-1378b47d7c6d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.731767 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfbd99cf-6302-48c2-b119-1378b47d7c6d-config\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.731804 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dfbd99cf-6302-48c2-b119-1378b47d7c6d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 
12:13:10.731871 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dfbd99cf-6302-48c2-b119-1378b47d7c6d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.731895 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfbd99cf-6302-48c2-b119-1378b47d7c6d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.731917 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4q8c\" (UniqueName: \"kubernetes.io/projected/dfbd99cf-6302-48c2-b119-1378b47d7c6d-kube-api-access-r4q8c\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.731960 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.732004 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfbd99cf-6302-48c2-b119-1378b47d7c6d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.833897 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dfbd99cf-6302-48c2-b119-1378b47d7c6d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.833948 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfbd99cf-6302-48c2-b119-1378b47d7c6d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.833977 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4q8c\" (UniqueName: \"kubernetes.io/projected/dfbd99cf-6302-48c2-b119-1378b47d7c6d-kube-api-access-r4q8c\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.834018 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.834049 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dfbd99cf-6302-48c2-b119-1378b47d7c6d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.834090 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfbd99cf-6302-48c2-b119-1378b47d7c6d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.834154 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfbd99cf-6302-48c2-b119-1378b47d7c6d-config\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.834197 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dfbd99cf-6302-48c2-b119-1378b47d7c6d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.834706 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dfbd99cf-6302-48c2-b119-1378b47d7c6d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.835227 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.838019 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfbd99cf-6302-48c2-b119-1378b47d7c6d-config\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.838652 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dfbd99cf-6302-48c2-b119-1378b47d7c6d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.842920 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfbd99cf-6302-48c2-b119-1378b47d7c6d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.844343 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfbd99cf-6302-48c2-b119-1378b47d7c6d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.844533 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfbd99cf-6302-48c2-b119-1378b47d7c6d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.853612 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4q8c\" (UniqueName: \"kubernetes.io/projected/dfbd99cf-6302-48c2-b119-1378b47d7c6d-kube-api-access-r4q8c\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.866366 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dfbd99cf-6302-48c2-b119-1378b47d7c6d\") " pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:10 crc kubenswrapper[4805]: I1216 12:13:10.878785 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:20 crc kubenswrapper[4805]: E1216 12:13:20.973167 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 16 12:13:20 crc kubenswrapper[4805]: E1216 12:13:20.973990 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rtpkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(93a741b0-5bcd-407b-8af7-90bd52380217): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:13:20 crc kubenswrapper[4805]: E1216 12:13:20.975250 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="93a741b0-5bcd-407b-8af7-90bd52380217" Dec 16 12:13:21 crc kubenswrapper[4805]: E1216 12:13:21.666404 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="93a741b0-5bcd-407b-8af7-90bd52380217" Dec 16 12:13:31 crc kubenswrapper[4805]: E1216 12:13:31.697966 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 16 12:13:31 crc kubenswrapper[4805]: E1216 12:13:31.698670 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:DB_ROOT_PASSWORD,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:DbRootPassword,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secrets,ReadOnly:true,MountPath:/var/lib/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdpwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(2b009857-5d9e-4d6e-979c-d7fc3357bd66): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:13:31 crc kubenswrapper[4805]: E1216 12:13:31.699908 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="2b009857-5d9e-4d6e-979c-d7fc3357bd66" Dec 16 12:13:31 crc kubenswrapper[4805]: E1216 12:13:31.732498 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="2b009857-5d9e-4d6e-979c-d7fc3357bd66" Dec 16 12:13:31 crc kubenswrapper[4805]: E1216 12:13:31.739369 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 16 12:13:31 crc kubenswrapper[4805]: E1216 
12:13:31.739594 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:DB_ROOT_PASSWORD,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:DbRootPassword,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secrets,ReadOnly:true,MountPath:/var/lib/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bc6xl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(dd66837a-9e6f-41fd-91a0-f010e02a3a80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:13:31 crc kubenswrapper[4805]: E1216 12:13:31.740852 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="dd66837a-9e6f-41fd-91a0-f010e02a3a80" Dec 16 12:13:32 crc kubenswrapper[4805]: E1216 12:13:32.743058 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="dd66837a-9e6f-41fd-91a0-f010e02a3a80" Dec 16 12:13:33 crc kubenswrapper[4805]: E1216 12:13:33.471397 4805 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Dec 16 12:13:33 crc kubenswrapper[4805]: E1216 12:13:33.471873 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6fh68h59bh68fhc4h5d8h59bh67bhd5h646hdfh64ch565h686h5fbh5b4hb8h56h694h7dh668h699h5b8h668h5d5hf4h57fhf7h56fh568h697h686q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xlmfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-ffmtv_openstack(b7c2d7ad-f96b-4eaa-b498-46e0739154f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:13:33 crc kubenswrapper[4805]: E1216 12:13:33.473061 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-ffmtv" podUID="b7c2d7ad-f96b-4eaa-b498-46e0739154f1" Dec 16 12:13:33 crc kubenswrapper[4805]: E1216 12:13:33.750249 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-ffmtv" podUID="b7c2d7ad-f96b-4eaa-b498-46e0739154f1" Dec 16 12:13:39 crc kubenswrapper[4805]: I1216 12:13:39.294814 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 16 12:13:39 crc kubenswrapper[4805]: E1216 12:13:39.769831 4805 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 16 12:13:39 crc kubenswrapper[4805]: E1216 12:13:39.770076 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4f68p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-jmhpf_openstack(2df3a617-03a6-49ea-b41f-bd29a3da10fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:13:39 crc kubenswrapper[4805]: E1216 12:13:39.773100 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" podUID="2df3a617-03a6-49ea-b41f-bd29a3da10fa" Dec 16 12:13:39 crc kubenswrapper[4805]: E1216 12:13:39.775382 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 16 12:13:39 crc kubenswrapper[4805]: E1216 12:13:39.775563 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground 
--log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcwv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-mgp7n_openstack(15427216-328f-499e-b505-0d689cbf31f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:13:39 crc kubenswrapper[4805]: E1216 12:13:39.777619 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" podUID="15427216-328f-499e-b505-0d689cbf31f2" Dec 16 12:13:39 crc kubenswrapper[4805]: E1216 12:13:39.815975 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" podUID="15427216-328f-499e-b505-0d689cbf31f2" Dec 16 12:13:39 crc kubenswrapper[4805]: E1216 12:13:39.816009 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" podUID="2df3a617-03a6-49ea-b41f-bd29a3da10fa" Dec 16 12:13:39 crc kubenswrapper[4805]: E1216 12:13:39.846727 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 16 12:13:39 crc kubenswrapper[4805]: E1216 12:13:39.846906 4805 kuberuntime_manager.go:1274] 
"Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcs5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-g7wbj_openstack(b5bc9b8a-1d0a-4284-8f33-2000536199c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:13:39 crc kubenswrapper[4805]: E1216 12:13:39.847980 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" podUID="b5bc9b8a-1d0a-4284-8f33-2000536199c8" Dec 16 12:13:39 crc kubenswrapper[4805]: E1216 12:13:39.866461 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 16 12:13:39 crc kubenswrapper[4805]: E1216 12:13:39.866625 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdf4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-stv8h_openstack(fedda822-568a-413e-a6aa-bb1a31c7ae40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:13:39 crc kubenswrapper[4805]: E1216 12:13:39.867891 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-stv8h" podUID="fedda822-568a-413e-a6aa-bb1a31c7ae40" Dec 16 12:13:40 crc kubenswrapper[4805]: E1216 12:13:40.153474 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Dec 16 12:13:40 crc kubenswrapper[4805]: E1216 12:13:40.154582 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6fh68h59bh68fhc4h5d8h59bh67bhd5h646hdfh64ch565h686h5fbh5b4hb8h56h694h7dh668h699h5b8h668h5d5hf4h57fhf7h56fh568h697h686q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glmpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-9xw22_openstack(f3e4543d-c48b-45a2-8eea-2584d5bba4b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:13:40 crc kubenswrapper[4805]: E1216 12:13:40.155781 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-9xw22" podUID="f3e4543d-c48b-45a2-8eea-2584d5bba4b6" Dec 16 12:13:40 crc kubenswrapper[4805]: I1216 12:13:40.274782 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-j76qb"] Dec 16 12:13:40 crc kubenswrapper[4805]: W1216 12:13:40.465675 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d3d6086_fcc3_4ba0_af90_445bbcb3ff06.slice/crio-df9654b6b0cf6d0111fd597adcf6fdffcdfe3eeab6722347674d46965257b004 WatchSource:0}: Error finding container df9654b6b0cf6d0111fd597adcf6fdffcdfe3eeab6722347674d46965257b004: Status 404 returned error can't find the container with id df9654b6b0cf6d0111fd597adcf6fdffcdfe3eeab6722347674d46965257b004 Dec 16 12:13:40 crc kubenswrapper[4805]: I1216 12:13:40.788645 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 16 12:13:40 crc kubenswrapper[4805]: I1216 12:13:40.819476 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"39d101de-9ba7-46dc-830e-3c25397a64d2","Type":"ContainerStarted","Data":"62d1357d2d29a3c1c747c2c90f1a0c5f04d5079fcb376185ffe15f716d74c3ac"} Dec 16 12:13:40 crc kubenswrapper[4805]: I1216 12:13:40.821640 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-j76qb" event={"ID":"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06","Type":"ContainerStarted","Data":"df9654b6b0cf6d0111fd597adcf6fdffcdfe3eeab6722347674d46965257b004"} Dec 16 12:13:40 crc kubenswrapper[4805]: E1216 12:13:40.824620 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-9xw22" podUID="f3e4543d-c48b-45a2-8eea-2584d5bba4b6" Dec 16 12:13:41 crc kubenswrapper[4805]: E1216 12:13:41.529567 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 16 12:13:41 crc kubenswrapper[4805]: E1216 12:13:41.529618 4805 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 16 12:13:41 crc kubenswrapper[4805]: E1216 12:13:41.529754 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8np6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(3c9ccbdb-6316-4956-bc60-507faeeba295): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 16 12:13:41 crc kubenswrapper[4805]: E1216 12:13:41.530956 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="3c9ccbdb-6316-4956-bc60-507faeeba295" Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.716011 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-stv8h" Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.756274 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.834058 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"dfbd99cf-6302-48c2-b119-1378b47d7c6d","Type":"ContainerStarted","Data":"f574a8fa90f2caafc9be480faf4bdb0a6f3f8d7d95db26dd4fa30e1994fc130d"} Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.835429 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-stv8h" Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.835424 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-stv8h" event={"ID":"fedda822-568a-413e-a6aa-bb1a31c7ae40","Type":"ContainerDied","Data":"f827d65d52b715f044b00cf799c26c02d5b57b607b9237852415f8d6b510e899"} Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.838887 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.839129 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g7wbj" event={"ID":"b5bc9b8a-1d0a-4284-8f33-2000536199c8","Type":"ContainerDied","Data":"952b3c0aa859113cf211d904e7501ae9ef13b6ed49ba8cc6831d3fcd81c8097d"} Dec 16 12:13:41 crc kubenswrapper[4805]: E1216 12:13:41.839663 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="3c9ccbdb-6316-4956-bc60-507faeeba295" Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.869467 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdf4d\" (UniqueName: \"kubernetes.io/projected/fedda822-568a-413e-a6aa-bb1a31c7ae40-kube-api-access-vdf4d\") pod \"fedda822-568a-413e-a6aa-bb1a31c7ae40\" (UID: \"fedda822-568a-413e-a6aa-bb1a31c7ae40\") " Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.869549 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5bc9b8a-1d0a-4284-8f33-2000536199c8-config\") pod \"b5bc9b8a-1d0a-4284-8f33-2000536199c8\" (UID: \"b5bc9b8a-1d0a-4284-8f33-2000536199c8\") " Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.869597 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5bc9b8a-1d0a-4284-8f33-2000536199c8-dns-svc\") pod \"b5bc9b8a-1d0a-4284-8f33-2000536199c8\" (UID: \"b5bc9b8a-1d0a-4284-8f33-2000536199c8\") " Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.869638 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fedda822-568a-413e-a6aa-bb1a31c7ae40-config\") pod \"fedda822-568a-413e-a6aa-bb1a31c7ae40\" (UID: \"fedda822-568a-413e-a6aa-bb1a31c7ae40\") " Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.869710 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcs5k\" (UniqueName: \"kubernetes.io/projected/b5bc9b8a-1d0a-4284-8f33-2000536199c8-kube-api-access-kcs5k\") pod \"b5bc9b8a-1d0a-4284-8f33-2000536199c8\" (UID: \"b5bc9b8a-1d0a-4284-8f33-2000536199c8\") " Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.871159 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5bc9b8a-1d0a-4284-8f33-2000536199c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b5bc9b8a-1d0a-4284-8f33-2000536199c8" (UID: "b5bc9b8a-1d0a-4284-8f33-2000536199c8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.871721 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5bc9b8a-1d0a-4284-8f33-2000536199c8-config" (OuterVolumeSpecName: "config") pod "b5bc9b8a-1d0a-4284-8f33-2000536199c8" (UID: "b5bc9b8a-1d0a-4284-8f33-2000536199c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.972968 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fedda822-568a-413e-a6aa-bb1a31c7ae40-kube-api-access-vdf4d" (OuterVolumeSpecName: "kube-api-access-vdf4d") pod "fedda822-568a-413e-a6aa-bb1a31c7ae40" (UID: "fedda822-568a-413e-a6aa-bb1a31c7ae40"). InnerVolumeSpecName "kube-api-access-vdf4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.973365 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5bc9b8a-1d0a-4284-8f33-2000536199c8-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.973478 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5bc9b8a-1d0a-4284-8f33-2000536199c8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.973586 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdf4d\" (UniqueName: \"kubernetes.io/projected/fedda822-568a-413e-a6aa-bb1a31c7ae40-kube-api-access-vdf4d\") on node \"crc\" DevicePath \"\"" Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.981271 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedda822-568a-413e-a6aa-bb1a31c7ae40-config" (OuterVolumeSpecName: "config") pod "fedda822-568a-413e-a6aa-bb1a31c7ae40" (UID: "fedda822-568a-413e-a6aa-bb1a31c7ae40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:13:41 crc kubenswrapper[4805]: I1216 12:13:41.995662 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5bc9b8a-1d0a-4284-8f33-2000536199c8-kube-api-access-kcs5k" (OuterVolumeSpecName: "kube-api-access-kcs5k") pod "b5bc9b8a-1d0a-4284-8f33-2000536199c8" (UID: "b5bc9b8a-1d0a-4284-8f33-2000536199c8"). InnerVolumeSpecName "kube-api-access-kcs5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:13:42 crc kubenswrapper[4805]: I1216 12:13:42.075564 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcs5k\" (UniqueName: \"kubernetes.io/projected/b5bc9b8a-1d0a-4284-8f33-2000536199c8-kube-api-access-kcs5k\") on node \"crc\" DevicePath \"\"" Dec 16 12:13:42 crc kubenswrapper[4805]: I1216 12:13:42.075602 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fedda822-568a-413e-a6aa-bb1a31c7ae40-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:13:42 crc kubenswrapper[4805]: I1216 12:13:42.209105 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-stv8h"] Dec 16 12:13:42 crc kubenswrapper[4805]: I1216 12:13:42.234998 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-stv8h"] Dec 16 12:13:42 crc kubenswrapper[4805]: I1216 12:13:42.263443 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g7wbj"] Dec 16 12:13:42 crc kubenswrapper[4805]: I1216 12:13:42.276171 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g7wbj"] Dec 16 12:13:42 crc kubenswrapper[4805]: I1216 12:13:42.542075 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5bc9b8a-1d0a-4284-8f33-2000536199c8" path="/var/lib/kubelet/pods/b5bc9b8a-1d0a-4284-8f33-2000536199c8/volumes" Dec 16 12:13:42 crc kubenswrapper[4805]: I1216 12:13:42.542732 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fedda822-568a-413e-a6aa-bb1a31c7ae40" path="/var/lib/kubelet/pods/fedda822-568a-413e-a6aa-bb1a31c7ae40/volumes" Dec 16 12:13:42 crc kubenswrapper[4805]: I1216 12:13:42.858638 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"76cc6f3a-504f-4096-8c08-efbcb51ad101","Type":"ContainerStarted","Data":"9161bac478ac5f4721c63535dcefeb63dfb966ac109058b84a6a80a9afeccfb3"} Dec 16 12:13:42 crc kubenswrapper[4805]: I1216 12:13:42.858885 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 16 12:13:42 crc kubenswrapper[4805]: I1216 12:13:42.892597 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.429061518 podStartE2EDuration="41.892571712s" podCreationTimestamp="2025-12-16 12:13:01 +0000 UTC" firstStartedPulling="2025-12-16 12:13:02.694051209 +0000 UTC m=+1056.412309014" lastFinishedPulling="2025-12-16 12:13:40.157561403 +0000 UTC m=+1093.875819208" observedRunningTime="2025-12-16 12:13:42.886776515 +0000 UTC m=+1096.605034340" watchObservedRunningTime="2025-12-16 12:13:42.892571712 +0000 UTC m=+1096.610829527" Dec 16 12:13:43 crc kubenswrapper[4805]: I1216 12:13:43.872630 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"93a741b0-5bcd-407b-8af7-90bd52380217","Type":"ContainerStarted","Data":"87a734dd4fa1ce3df2d708801310e4e55538ae62cd5acb8dd81e886b21729644"} Dec 16 12:13:46 crc kubenswrapper[4805]: I1216 12:13:46.488098 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 16 12:13:47 crc kubenswrapper[4805]: I1216 12:13:47.926675 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"55267a44-aaa0-494b-922a-014b08eddcd9","Type":"ContainerStarted","Data":"ba41853f3136944c440c35ba755957daedde0ef9f029dbcc7b426ba254586616"} Dec 16 12:13:47 crc kubenswrapper[4805]: I1216 12:13:47.929709 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2b009857-5d9e-4d6e-979c-d7fc3357bd66","Type":"ContainerStarted","Data":"ac22f1d397c8a12da84d5aec4a621a1d321d3c1407be31991889f6c4ab46eb8c"} Dec 16 12:13:47 crc kubenswrapper[4805]: I1216 12:13:47.938418 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"39d101de-9ba7-46dc-830e-3c25397a64d2","Type":"ContainerStarted","Data":"db43b3dee7da4bdeac78893c1e099b60d24be525417f36a26bfe44ea25b9362e"} Dec 16 12:13:47 crc kubenswrapper[4805]: I1216 12:13:47.938790 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"39d101de-9ba7-46dc-830e-3c25397a64d2","Type":"ContainerStarted","Data":"4a04d32cad8b5bb6f34d6e8d46bc78263a1a20b7ea4ffcaeaec72322bf27d5a7"} Dec 16 12:13:47 crc kubenswrapper[4805]: I1216 12:13:47.941769 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"dfbd99cf-6302-48c2-b119-1378b47d7c6d","Type":"ContainerStarted","Data":"59992f9f606bfcefed2be148ceda938a2fd3b913880c3d348c963d36c54e1b59"} Dec 16 12:13:47 crc kubenswrapper[4805]: I1216 12:13:47.941812 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"dfbd99cf-6302-48c2-b119-1378b47d7c6d","Type":"ContainerStarted","Data":"49ba88e63612e15f959ac44456bd8f9e07b7e660a8d083e2faf6f37a63ed0a41"} Dec 16 12:13:48 crc kubenswrapper[4805]: I1216 12:13:48.010748 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=32.514383569 podStartE2EDuration="39.010726209s" podCreationTimestamp="2025-12-16 12:13:09 +0000 UTC" firstStartedPulling="2025-12-16 12:13:40.817511855 +0000 UTC m=+1094.535769660" lastFinishedPulling="2025-12-16 12:13:47.313854495 +0000 UTC m=+1101.032112300" observedRunningTime="2025-12-16 12:13:48.003616794 +0000 UTC m=+1101.721874609" watchObservedRunningTime="2025-12-16 12:13:48.010726209 +0000 UTC m=+1101.728984024" Dec 16 12:13:48 crc kubenswrapper[4805]: I1216 12:13:48.035055 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=34.887984062 podStartE2EDuration="42.035038488s" podCreationTimestamp="2025-12-16 12:13:06 +0000 UTC" firstStartedPulling="2025-12-16 12:13:40.166053937 +0000 UTC m=+1093.884311742" lastFinishedPulling="2025-12-16 12:13:47.313108353 +0000 UTC m=+1101.031366168" observedRunningTime="2025-12-16 12:13:48.029397316 +0000 UTC m=+1101.747655121" watchObservedRunningTime="2025-12-16 12:13:48.035038488 +0000 UTC m=+1101.753296303" Dec 16 12:13:48 crc kubenswrapper[4805]: I1216 12:13:48.063798 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:48 crc kubenswrapper[4805]: I1216 12:13:48.960523 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dd66837a-9e6f-41fd-91a0-f010e02a3a80","Type":"ContainerStarted","Data":"5d2a32a6781c31f61b711f10b716b059c8e5d0cc0305e51ece70e16c4aa23c56"} Dec 16 12:13:48 crc kubenswrapper[4805]: I1216 12:13:48.965127 4805 generic.go:334] "Generic (PLEG): container finished" podID="b7c2d7ad-f96b-4eaa-b498-46e0739154f1" 
containerID="a0a399d3e056c25d06829308135ef87726b6bf6f530e3fc51d9b5974208d2ae4" exitCode=0 Dec 16 12:13:48 crc kubenswrapper[4805]: I1216 12:13:48.965174 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ffmtv" event={"ID":"b7c2d7ad-f96b-4eaa-b498-46e0739154f1","Type":"ContainerDied","Data":"a0a399d3e056c25d06829308135ef87726b6bf6f530e3fc51d9b5974208d2ae4"} Dec 16 12:13:48 crc kubenswrapper[4805]: I1216 12:13:48.967910 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-j76qb" event={"ID":"2d3d6086-fcc3-4ba0-af90-445bbcb3ff06","Type":"ContainerStarted","Data":"850630e255c6b2a485c6333362f575873c8d7b4e71ff402e156603f0bb2f1bfa"} Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.047982 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-j76qb" podStartSLOduration=32.917959878 podStartE2EDuration="40.047961064s" podCreationTimestamp="2025-12-16 12:13:09 +0000 UTC" firstStartedPulling="2025-12-16 12:13:40.467881178 +0000 UTC m=+1094.186138983" lastFinishedPulling="2025-12-16 12:13:47.597882364 +0000 UTC m=+1101.316140169" observedRunningTime="2025-12-16 12:13:49.036536465 +0000 UTC m=+1102.754794290" watchObservedRunningTime="2025-12-16 12:13:49.047961064 +0000 UTC m=+1102.766218889" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.482994 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mgp7n"] Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.525069 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-x944g"] Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.526985 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-x944g" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.542728 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.552175 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-x944g\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " pod="openstack/dnsmasq-dns-7fd796d7df-x944g" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.552312 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plzjq\" (UniqueName: \"kubernetes.io/projected/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-kube-api-access-plzjq\") pod \"dnsmasq-dns-7fd796d7df-x944g\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " pod="openstack/dnsmasq-dns-7fd796d7df-x944g" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.552338 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-config\") pod \"dnsmasq-dns-7fd796d7df-x944g\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " pod="openstack/dnsmasq-dns-7fd796d7df-x944g" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.552537 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-x944g\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " pod="openstack/dnsmasq-dns-7fd796d7df-x944g" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.575811 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-x944g"] Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.654491 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-x944g\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " pod="openstack/dnsmasq-dns-7fd796d7df-x944g" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.654626 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plzjq\" (UniqueName: \"kubernetes.io/projected/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-kube-api-access-plzjq\") pod \"dnsmasq-dns-7fd796d7df-x944g\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " pod="openstack/dnsmasq-dns-7fd796d7df-x944g" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.654653 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-config\") pod \"dnsmasq-dns-7fd796d7df-x944g\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " pod="openstack/dnsmasq-dns-7fd796d7df-x944g" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.654796 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-x944g\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " pod="openstack/dnsmasq-dns-7fd796d7df-x944g" 
Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.655466 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-x944g\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " pod="openstack/dnsmasq-dns-7fd796d7df-x944g" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.656131 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-config\") pod \"dnsmasq-dns-7fd796d7df-x944g\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " pod="openstack/dnsmasq-dns-7fd796d7df-x944g" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.657432 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-x944g\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " pod="openstack/dnsmasq-dns-7fd796d7df-x944g" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.721919 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plzjq\" (UniqueName: \"kubernetes.io/projected/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-kube-api-access-plzjq\") pod \"dnsmasq-dns-7fd796d7df-x944g\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " pod="openstack/dnsmasq-dns-7fd796d7df-x944g" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.795716 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jmhpf"] Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.862092 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-x944g" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.862503 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9x7sc"] Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.864124 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.867267 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.886336 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.897885 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9x7sc"] Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.962065 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-9x7sc\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.962419 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-config\") pod \"dnsmasq-dns-86db49b7ff-9x7sc\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.962456 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-9x7sc\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.962481 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-9x7sc\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:49 crc kubenswrapper[4805]: I1216 12:13:49.962585 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7dfv\" (UniqueName: \"kubernetes.io/projected/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-kube-api-access-d7dfv\") pod \"dnsmasq-dns-86db49b7ff-9x7sc\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.000833 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ffmtv" event={"ID":"b7c2d7ad-f96b-4eaa-b498-46e0739154f1","Type":"ContainerStarted","Data":"7e14cdb9e654e99f789a82b025364fda136b0ee1318cb9e4ecaf088f887e48bb"} Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.068887 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.069040 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-9x7sc\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.069079 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-config\") pod \"dnsmasq-dns-86db49b7ff-9x7sc\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.069110 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-9x7sc\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.069134 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-9x7sc\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.069208 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7dfv\" (UniqueName: \"kubernetes.io/projected/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-kube-api-access-d7dfv\") pod \"dnsmasq-dns-86db49b7ff-9x7sc\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.070494 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-config\") pod \"dnsmasq-dns-86db49b7ff-9x7sc\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.071120 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-9x7sc\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.071915 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-9x7sc\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.071937 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-9x7sc\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.159053 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7dfv\" (UniqueName: \"kubernetes.io/projected/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-kube-api-access-d7dfv\") pod \"dnsmasq-dns-86db49b7ff-9x7sc\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.240963 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.286757 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.445319 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.551805 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.579831 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15427216-328f-499e-b505-0d689cbf31f2-dns-svc\") pod \"15427216-328f-499e-b505-0d689cbf31f2\" (UID: \"15427216-328f-499e-b505-0d689cbf31f2\") " Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.579905 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcwv6\" (UniqueName: \"kubernetes.io/projected/15427216-328f-499e-b505-0d689cbf31f2-kube-api-access-jcwv6\") pod \"15427216-328f-499e-b505-0d689cbf31f2\" (UID: \"15427216-328f-499e-b505-0d689cbf31f2\") " Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.580107 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15427216-328f-499e-b505-0d689cbf31f2-config\") pod \"15427216-328f-499e-b505-0d689cbf31f2\" (UID: \"15427216-328f-499e-b505-0d689cbf31f2\") " Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.581112 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15427216-328f-499e-b505-0d689cbf31f2-config" (OuterVolumeSpecName: "config") pod "15427216-328f-499e-b505-0d689cbf31f2" (UID: "15427216-328f-499e-b505-0d689cbf31f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.581603 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15427216-328f-499e-b505-0d689cbf31f2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "15427216-328f-499e-b505-0d689cbf31f2" (UID: "15427216-328f-499e-b505-0d689cbf31f2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.588807 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15427216-328f-499e-b505-0d689cbf31f2-kube-api-access-jcwv6" (OuterVolumeSpecName: "kube-api-access-jcwv6") pod "15427216-328f-499e-b505-0d689cbf31f2" (UID: "15427216-328f-499e-b505-0d689cbf31f2"). InnerVolumeSpecName "kube-api-access-jcwv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.681408 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f68p\" (UniqueName: \"kubernetes.io/projected/2df3a617-03a6-49ea-b41f-bd29a3da10fa-kube-api-access-4f68p\") pod \"2df3a617-03a6-49ea-b41f-bd29a3da10fa\" (UID: \"2df3a617-03a6-49ea-b41f-bd29a3da10fa\") " Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.682092 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2df3a617-03a6-49ea-b41f-bd29a3da10fa-dns-svc\") pod \"2df3a617-03a6-49ea-b41f-bd29a3da10fa\" (UID: \"2df3a617-03a6-49ea-b41f-bd29a3da10fa\") " Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.682214 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2df3a617-03a6-49ea-b41f-bd29a3da10fa-config\") pod \"2df3a617-03a6-49ea-b41f-bd29a3da10fa\" (UID: \"2df3a617-03a6-49ea-b41f-bd29a3da10fa\") " Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.684570 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2df3a617-03a6-49ea-b41f-bd29a3da10fa-config" (OuterVolumeSpecName: "config") pod "2df3a617-03a6-49ea-b41f-bd29a3da10fa" (UID: "2df3a617-03a6-49ea-b41f-bd29a3da10fa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.687070 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2df3a617-03a6-49ea-b41f-bd29a3da10fa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2df3a617-03a6-49ea-b41f-bd29a3da10fa" (UID: "2df3a617-03a6-49ea-b41f-bd29a3da10fa"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.693467 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15427216-328f-499e-b505-0d689cbf31f2-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.693504 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15427216-328f-499e-b505-0d689cbf31f2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.693516 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2df3a617-03a6-49ea-b41f-bd29a3da10fa-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.693528 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcwv6\" (UniqueName: \"kubernetes.io/projected/15427216-328f-499e-b505-0d689cbf31f2-kube-api-access-jcwv6\") on node \"crc\" DevicePath \"\"" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.693541 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2df3a617-03a6-49ea-b41f-bd29a3da10fa-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.711365 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2df3a617-03a6-49ea-b41f-bd29a3da10fa-kube-api-access-4f68p" (OuterVolumeSpecName: "kube-api-access-4f68p") pod "2df3a617-03a6-49ea-b41f-bd29a3da10fa" (UID: "2df3a617-03a6-49ea-b41f-bd29a3da10fa"). InnerVolumeSpecName "kube-api-access-4f68p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.723901 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-x944g"] Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.794666 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f68p\" (UniqueName: \"kubernetes.io/projected/2df3a617-03a6-49ea-b41f-bd29a3da10fa-kube-api-access-4f68p\") on node \"crc\" DevicePath \"\"" Dec 16 12:13:50 crc kubenswrapper[4805]: I1216 12:13:50.879218 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:51 crc kubenswrapper[4805]: I1216 12:13:51.007997 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" event={"ID":"15427216-328f-499e-b505-0d689cbf31f2","Type":"ContainerDied","Data":"57259f0bdd8b6dca12e879671fc20d1061b2f4de9d787b590632ce99eda7af92"} Dec 16 12:13:51 crc kubenswrapper[4805]: I1216 12:13:51.008155 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mgp7n" Dec 16 12:13:51 crc kubenswrapper[4805]: I1216 12:13:51.011316 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" event={"ID":"2df3a617-03a6-49ea-b41f-bd29a3da10fa","Type":"ContainerDied","Data":"d56c98a93f660d5bbe00d9092be1836b7cb2de8e5ebea0a29448350c0ff70663"} Dec 16 12:13:51 crc kubenswrapper[4805]: I1216 12:13:51.011379 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jmhpf" Dec 16 12:13:51 crc kubenswrapper[4805]: W1216 12:13:51.016424 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cb53c8d_e006_4a9a_b2f9_4d7c34732801.slice/crio-99a6a0ae3cbb743b50fd3c2a5b8d026d471cee8029a53c7eb46c220760bb9c82 WatchSource:0}: Error finding container 99a6a0ae3cbb743b50fd3c2a5b8d026d471cee8029a53c7eb46c220760bb9c82: Status 404 returned error can't find the container with id 99a6a0ae3cbb743b50fd3c2a5b8d026d471cee8029a53c7eb46c220760bb9c82 Dec 16 12:13:51 crc kubenswrapper[4805]: I1216 12:13:51.017795 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ffmtv" event={"ID":"b7c2d7ad-f96b-4eaa-b498-46e0739154f1","Type":"ContainerStarted","Data":"0ac9d76578dde4eb7a43ecf950562bb40597d36d802b0558d10d5d5b416442a2"} Dec 16 12:13:51 crc kubenswrapper[4805]: I1216 12:13:51.017995 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:51 crc kubenswrapper[4805]: I1216 12:13:51.018117 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:13:51 crc kubenswrapper[4805]: I1216 12:13:51.018900 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9x7sc"] Dec 16 12:13:51 crc kubenswrapper[4805]: I1216 12:13:51.025528 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-x944g" event={"ID":"ebaf00e1-0933-49b6-9c08-9e08ed8a7890","Type":"ContainerStarted","Data":"f61d67b8791201c4ded0c2fe7a50ca0df3f56e7c71761512a1cf34cad7cb0f41"} Dec 16 12:13:51 crc kubenswrapper[4805]: I1216 12:13:51.071315 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mgp7n"] Dec 16 12:13:51 crc kubenswrapper[4805]: I1216 12:13:51.099238 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mgp7n"] Dec 16 12:13:51 crc kubenswrapper[4805]: I1216 12:13:51.099923 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-ffmtv" podStartSLOduration=6.704931177 podStartE2EDuration="46.099904045s" podCreationTimestamp="2025-12-16 12:13:05 +0000 UTC" firstStartedPulling="2025-12-16 12:13:08.659748416 +0000 UTC m=+1062.378006221" lastFinishedPulling="2025-12-16 12:13:48.054721284 +0000 UTC m=+1101.772979089" observedRunningTime="2025-12-16 12:13:51.092180772 +0000 UTC m=+1104.810438607" watchObservedRunningTime="2025-12-16 12:13:51.099904045 +0000 UTC m=+1104.818161860" Dec 16 12:13:51 crc kubenswrapper[4805]: I1216 12:13:51.152201 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jmhpf"] Dec 16 12:13:51 crc kubenswrapper[4805]: I1216 12:13:51.158195 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jmhpf"] Dec 16 12:13:52 crc kubenswrapper[4805]: I1216 12:13:52.036306 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" event={"ID":"1cb53c8d-e006-4a9a-b2f9-4d7c34732801","Type":"ContainerStarted","Data":"99a6a0ae3cbb743b50fd3c2a5b8d026d471cee8029a53c7eb46c220760bb9c82"} Dec 16 12:13:52 crc kubenswrapper[4805]: I1216 12:13:52.090045 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 16 12:13:52 crc kubenswrapper[4805]: 
I1216 12:13:52.741777 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15427216-328f-499e-b505-0d689cbf31f2" path="/var/lib/kubelet/pods/15427216-328f-499e-b505-0d689cbf31f2/volumes" Dec 16 12:13:52 crc kubenswrapper[4805]: I1216 12:13:52.742471 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2df3a617-03a6-49ea-b41f-bd29a3da10fa" path="/var/lib/kubelet/pods/2df3a617-03a6-49ea-b41f-bd29a3da10fa/volumes" Dec 16 12:13:52 crc kubenswrapper[4805]: I1216 12:13:52.918608 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:52 crc kubenswrapper[4805]: I1216 12:13:52.970165 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.104665 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.106076 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.139860 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.141609 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.142027 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.146002 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-pbj82" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.154154 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.211190 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-x944g"] Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.225124 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87517712-55f8-42c7-8a23-cb388090ed3c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.225202 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87517712-55f8-42c7-8a23-cb388090ed3c-config\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.225249 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/87517712-55f8-42c7-8a23-cb388090ed3c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.225272 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7qj6\" (UniqueName: \"kubernetes.io/projected/87517712-55f8-42c7-8a23-cb388090ed3c-kube-api-access-g7qj6\") pod \"ovn-northd-0\" (UID: 
\"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.225291 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/87517712-55f8-42c7-8a23-cb388090ed3c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.225327 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/87517712-55f8-42c7-8a23-cb388090ed3c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.225349 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87517712-55f8-42c7-8a23-cb388090ed3c-scripts\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.302152 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-jqth5"] Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.304214 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.326808 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/87517712-55f8-42c7-8a23-cb388090ed3c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.326866 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7qj6\" (UniqueName: \"kubernetes.io/projected/87517712-55f8-42c7-8a23-cb388090ed3c-kube-api-access-g7qj6\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.326895 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/87517712-55f8-42c7-8a23-cb388090ed3c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.326936 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/87517712-55f8-42c7-8a23-cb388090ed3c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.326975 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87517712-55f8-42c7-8a23-cb388090ed3c-scripts\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.327018 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/87517712-55f8-42c7-8a23-cb388090ed3c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.327050 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87517712-55f8-42c7-8a23-cb388090ed3c-config\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.327898 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87517712-55f8-42c7-8a23-cb388090ed3c-config\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.332096 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87517712-55f8-42c7-8a23-cb388090ed3c-scripts\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.333054 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/87517712-55f8-42c7-8a23-cb388090ed3c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.339787 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/87517712-55f8-42c7-8a23-cb388090ed3c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.347713 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/87517712-55f8-42c7-8a23-cb388090ed3c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.347849 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87517712-55f8-42c7-8a23-cb388090ed3c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.388224 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7qj6\" (UniqueName: \"kubernetes.io/projected/87517712-55f8-42c7-8a23-cb388090ed3c-kube-api-access-g7qj6\") pod \"ovn-northd-0\" (UID: \"87517712-55f8-42c7-8a23-cb388090ed3c\") " pod="openstack/ovn-northd-0" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.407241 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jqth5"] Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.428672 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-dns-svc\") pod \"dnsmasq-dns-698758b865-jqth5\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:54 crc 
kubenswrapper[4805]: I1216 12:13:54.428716 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ht2k\" (UniqueName: \"kubernetes.io/projected/a2cc735d-1342-4b12-a93e-d115daf8c3f3-kube-api-access-7ht2k\") pod \"dnsmasq-dns-698758b865-jqth5\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.428894 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-config\") pod \"dnsmasq-dns-698758b865-jqth5\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.429087 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jqth5\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.429122 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jqth5\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:54 crc kubenswrapper[4805]: I1216 12:13:54.437446 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.045705 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-dns-svc\") pod \"dnsmasq-dns-698758b865-jqth5\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.045745 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ht2k\" (UniqueName: \"kubernetes.io/projected/a2cc735d-1342-4b12-a93e-d115daf8c3f3-kube-api-access-7ht2k\") pod \"dnsmasq-dns-698758b865-jqth5\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.045811 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-config\") pod \"dnsmasq-dns-698758b865-jqth5\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.045887 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jqth5\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.045903 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jqth5\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.047428 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jqth5\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.048729 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-dns-svc\") pod \"dnsmasq-dns-698758b865-jqth5\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.049548 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-config\") pod \"dnsmasq-dns-698758b865-jqth5\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.050111 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jqth5\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.079418 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ht2k\" (UniqueName: \"kubernetes.io/projected/a2cc735d-1342-4b12-a93e-d115daf8c3f3-kube-api-access-7ht2k\") pod \"dnsmasq-dns-698758b865-jqth5\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.219735 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.878557 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.884995 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.890471 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.890853 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-4d5nm" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.892939 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.926236 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.939020 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-xsqmm"] Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.940432 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.945331 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.945451 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.961226 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 16 12:13:55 crc kubenswrapper[4805]: I1216 12:13:55.974847 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-xsqmm"] Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.033256 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c7ed9b27-9804-4584-a244-30ba1f033e17-lock\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.033584 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcgm9\" (UniqueName: \"kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-kube-api-access-kcgm9\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.033620 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c7ed9b27-9804-4584-a244-30ba1f033e17-cache\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.033664 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.033704 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.100087 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.386294 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.386383 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-scripts\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.386426 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c7ed9b27-9804-4584-a244-30ba1f033e17-lock\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.386457 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-dispersionconf\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.386497 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcgm9\" (UniqueName: \"kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-kube-api-access-kcgm9\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.386534 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-etc-swift\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.386565 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-swiftconf\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.386594 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c7ed9b27-9804-4584-a244-30ba1f033e17-cache\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.386645 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xcfm\" (UniqueName: \"kubernetes.io/projected/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-kube-api-access-7xcfm\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.386667 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.386695 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-ring-data-devices\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.386722 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-combined-ca-bundle\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: E1216 12:13:56.389875 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 16 12:13:56 crc kubenswrapper[4805]: E1216 12:13:56.389900 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 16 12:13:56 crc kubenswrapper[4805]: E1216 12:13:56.389949 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift podName:c7ed9b27-9804-4584-a244-30ba1f033e17 nodeName:}" failed. No retries permitted until 2025-12-16 12:13:56.889930027 +0000 UTC m=+1110.608187832 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift") pod "swift-storage-0" (UID: "c7ed9b27-9804-4584-a244-30ba1f033e17") : configmap "swift-ring-files" not found Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.390643 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c7ed9b27-9804-4584-a244-30ba1f033e17-lock\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.391044 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c7ed9b27-9804-4584-a244-30ba1f033e17-cache\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.391283 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.470446 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.488900 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-scripts\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.491366 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-dispersionconf\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.491694 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-etc-swift\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.491734 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-swiftconf\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.491833 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xcfm\" (UniqueName: \"kubernetes.io/projected/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-kube-api-access-7xcfm\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.491881 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-ring-data-devices\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.491917 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-combined-ca-bundle\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.498257 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcgm9\" (UniqueName: \"kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-kube-api-access-kcgm9\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.506865 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-scripts\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.508712 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-etc-swift\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.509385 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-ring-data-devices\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.512721 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-swiftconf\") pod \"swift-ring-rebalance-xsqmm\" (UID: 
\"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.517163 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-dispersionconf\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.520160 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-combined-ca-bundle\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.548360 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xcfm\" (UniqueName: \"kubernetes.io/projected/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-kube-api-access-7xcfm\") pod \"swift-ring-rebalance-xsqmm\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:56 crc kubenswrapper[4805]: I1216 12:13:56.566250 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:13:57 crc kubenswrapper[4805]: I1216 12:13:57.138855 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:57 crc kubenswrapper[4805]: E1216 12:13:57.139382 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 16 12:13:57 crc kubenswrapper[4805]: E1216 12:13:57.139396 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 16 12:13:57 crc kubenswrapper[4805]: E1216 12:13:57.139438 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift podName:c7ed9b27-9804-4584-a244-30ba1f033e17 nodeName:}" failed. No retries permitted until 2025-12-16 12:13:58.139425327 +0000 UTC m=+1111.857683132 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift") pod "swift-storage-0" (UID: "c7ed9b27-9804-4584-a244-30ba1f033e17") : configmap "swift-ring-files" not found Dec 16 12:13:57 crc kubenswrapper[4805]: I1216 12:13:57.207634 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 16 12:13:57 crc kubenswrapper[4805]: I1216 12:13:57.559320 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jqth5"] Dec 16 12:13:57 crc kubenswrapper[4805]: I1216 12:13:57.597677 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-xsqmm"] Dec 16 12:13:58 crc kubenswrapper[4805]: I1216 12:13:58.171879 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:13:58 crc kubenswrapper[4805]: E1216 12:13:58.172086 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 16 12:13:58 crc kubenswrapper[4805]: E1216 12:13:58.172101 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 16 12:13:58 crc kubenswrapper[4805]: E1216 12:13:58.172156 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift podName:c7ed9b27-9804-4584-a244-30ba1f033e17 nodeName:}" failed. No retries permitted until 2025-12-16 12:14:00.17212837 +0000 UTC m=+1113.890386175 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift") pod "swift-storage-0" (UID: "c7ed9b27-9804-4584-a244-30ba1f033e17") : configmap "swift-ring-files" not found Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.194856 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xsqmm" event={"ID":"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5","Type":"ContainerStarted","Data":"84af452ed5d2829d90c12d8b6c26508a609a56c5ad7d1d48c7097b343c275d09"} Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.201957 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c9ccbdb-6316-4956-bc60-507faeeba295","Type":"ContainerStarted","Data":"5a7c033e3c68fa08b3de14f0a991b917a86c474d7294998ebefb87d6e5ec34c5"} Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.202211 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.206378 4805 generic.go:334] "Generic (PLEG): container finished" podID="1cb53c8d-e006-4a9a-b2f9-4d7c34732801" containerID="3246f8e2a878b7d76e068ccd9c6ffff32156075730f1d1b7b3534ea4e986610e" exitCode=0 Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.206450 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" event={"ID":"1cb53c8d-e006-4a9a-b2f9-4d7c34732801","Type":"ContainerDied","Data":"3246f8e2a878b7d76e068ccd9c6ffff32156075730f1d1b7b3534ea4e986610e"} Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.210847 4805 generic.go:334] "Generic (PLEG): container finished" podID="dd66837a-9e6f-41fd-91a0-f010e02a3a80" containerID="5d2a32a6781c31f61b711f10b716b059c8e5d0cc0305e51ece70e16c4aa23c56" exitCode=0 Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.210905 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dd66837a-9e6f-41fd-91a0-f010e02a3a80","Type":"ContainerDied","Data":"5d2a32a6781c31f61b711f10b716b059c8e5d0cc0305e51ece70e16c4aa23c56"} Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.217024 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9xw22" event={"ID":"f3e4543d-c48b-45a2-8eea-2584d5bba4b6","Type":"ContainerStarted","Data":"2e10b7150a2ad292ede714e28e7717eb963786cc44d33c88f65188b1b265700e"} Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.220462 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-9xw22" Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.227281 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.9896290030000001 podStartE2EDuration="56.227265269s" podCreationTimestamp="2025-12-16 12:13:03 +0000 UTC" firstStartedPulling="2025-12-16 12:13:04.280784581 +0000 UTC m=+1057.999042376" lastFinishedPulling="2025-12-16 12:13:58.518420837 +0000 UTC m=+1112.236678642" observedRunningTime="2025-12-16 12:13:59.220966978 +0000 UTC m=+1112.939224793" watchObservedRunningTime="2025-12-16 12:13:59.227265269 +0000 UTC m=+1112.945523094" Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.228506 4805 generic.go:334] "Generic (PLEG): container finished" podID="ebaf00e1-0933-49b6-9c08-9e08ed8a7890" containerID="e7ce758e42537e945d64d94b1d5703dc2f89abb180eba32bbe9ca6b917510a0f" exitCode=0 Dec 16 12:13:59 crc 
kubenswrapper[4805]: I1216 12:13:59.228582 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-x944g" event={"ID":"ebaf00e1-0933-49b6-9c08-9e08ed8a7890","Type":"ContainerDied","Data":"e7ce758e42537e945d64d94b1d5703dc2f89abb180eba32bbe9ca6b917510a0f"} Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.247950 4805 generic.go:334] "Generic (PLEG): container finished" podID="a2cc735d-1342-4b12-a93e-d115daf8c3f3" containerID="95bc720f0f0ba7a2c79bfc4c674ebccfc0875c1f3dac2955e4a1955b39c05dd2" exitCode=0 Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.248071 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jqth5" event={"ID":"a2cc735d-1342-4b12-a93e-d115daf8c3f3","Type":"ContainerDied","Data":"95bc720f0f0ba7a2c79bfc4c674ebccfc0875c1f3dac2955e4a1955b39c05dd2"} Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.248105 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jqth5" event={"ID":"a2cc735d-1342-4b12-a93e-d115daf8c3f3","Type":"ContainerStarted","Data":"4f505838886205fd731874db942ea056738266435ceebfe436941e1b427a5b3e"} Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.256501 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"87517712-55f8-42c7-8a23-cb388090ed3c","Type":"ContainerStarted","Data":"87c9295df02e91ef32481c4ec9c34637d0214f251285ddc8fc91bb7a07d9cf04"} Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.274872 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9xw22" podStartSLOduration=4.188366476 podStartE2EDuration="54.274848537s" podCreationTimestamp="2025-12-16 12:13:05 +0000 UTC" firstStartedPulling="2025-12-16 12:13:08.63588283 +0000 UTC m=+1062.354140635" lastFinishedPulling="2025-12-16 12:13:58.722364891 +0000 UTC m=+1112.440622696" observedRunningTime="2025-12-16 12:13:59.244592357 +0000 UTC m=+1112.962850182" watchObservedRunningTime="2025-12-16 12:13:59.274848537 +0000 UTC m=+1112.993106352" Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.822328 4805 util.go:48] "No ready sandbox for pod can be found. 
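
Annotation: the pod_startup_latency_tracker records above can be checked by hand. podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, read most precisely from the monotonic m=+ offsets). Worked through for kube-state-metrics-0, as a sketch of the arithmetic rather than the tracker's actual code:

    package main

    import "fmt"

    func main() {
        // Monotonic m=+ offsets from the kube-state-metrics-0 record, in seconds.
        pullWindow := 1112.236678642 - 1057.999042376 // lastFinishedPulling - firstStartedPulling
        e2e := 56.227265269 // observedRunningTime - podCreationTimestamp (12:13:03 -> 12:13:59.227265269)

        fmt.Printf("image pull window: %.9fs\n", pullWindow)        // 54.237636266s
        fmt.Printf("podStartSLOduration ~ %.9fs\n", e2e-pullWindow) // ~1.989629003s, matching the log
    }

So almost all of the 56s startup was image pulling; the pod itself became ready about 2s after the pull finished.
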
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-x944g" Dec 16 12:13:59 crc kubenswrapper[4805]: E1216 12:13:59.909332 4805 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Dec 16 12:13:59 crc kubenswrapper[4805]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/1cb53c8d-e006-4a9a-b2f9-4d7c34732801/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 16 12:13:59 crc kubenswrapper[4805]: > podSandboxID="99a6a0ae3cbb743b50fd3c2a5b8d026d471cee8029a53c7eb46c220760bb9c82" Dec 16 12:13:59 crc kubenswrapper[4805]: E1216 12:13:59.909520 4805 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 16 12:13:59 crc kubenswrapper[4805]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d7dfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86db49b7ff-9x7sc_openstack(1cb53c8d-e006-4a9a-b2f9-4d7c34732801): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/1cb53c8d-e006-4a9a-b2f9-4d7c34732801/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 16 12:13:59 crc kubenswrapper[4805]: > logger="UnhandledError" Dec 16 12:13:59 crc kubenswrapper[4805]: E1216 12:13:59.910757 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/1cb53c8d-e006-4a9a-b2f9-4d7c34732801/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" podUID="1cb53c8d-e006-4a9a-b2f9-4d7c34732801" Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.974003 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plzjq\" (UniqueName: \"kubernetes.io/projected/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-kube-api-access-plzjq\") pod \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.974175 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-dns-svc\") pod \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.974220 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-ovsdbserver-nb\") pod \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.974261 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-config\") pod \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\" (UID: \"ebaf00e1-0933-49b6-9c08-9e08ed8a7890\") " Dec 16 12:13:59 crc kubenswrapper[4805]: I1216 12:13:59.991969 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-kube-api-access-plzjq" (OuterVolumeSpecName: "kube-api-access-plzjq") pod "ebaf00e1-0933-49b6-9c08-9e08ed8a7890" (UID: "ebaf00e1-0933-49b6-9c08-9e08ed8a7890"). InnerVolumeSpecName "kube-api-access-plzjq". 
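
Annotation: the CreateContainerError above is a subPath failure. For a VolumeMount with SubPath set, the kubelet first bind-mounts the resolved file under /var/lib/kubelet/pods/<uid>/volume-subpaths/<volume>/<container>/<n>, and the runtime then mounts that path into the container rootfs (which is why the target in the error, etc/dnsmasq.d/hosts/dns-svc, is relative). The "No such file or directory" typically means the bind-mount source vanished between volume setup and container creation, for example after the ConfigMap content was swapped or the sandbox was restarted, both of which are happening to this pod in the surrounding records; the retry at 12:14:01 below succeeds. The failing mount, reconstructed from the dumped Container spec above using the upstream k8s.io/api types (a sketch; values copied from the log, requires the k8s.io/api module):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        m := corev1.VolumeMount{
            Name:      "dns-svc",
            ReadOnly:  true,
            MountPath: "/etc/dnsmasq.d/hosts/dns-svc",
            SubPath:   "dns-svc", // SubPath is what routes through volume-subpaths/dns-svc/dnsmasq-dns/<n>
        }
        fmt.Printf("%+v\n", m)
    }
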
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.004344 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-config" (OuterVolumeSpecName: "config") pod "ebaf00e1-0933-49b6-9c08-9e08ed8a7890" (UID: "ebaf00e1-0933-49b6-9c08-9e08ed8a7890"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.012618 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ebaf00e1-0933-49b6-9c08-9e08ed8a7890" (UID: "ebaf00e1-0933-49b6-9c08-9e08ed8a7890"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.018671 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ebaf00e1-0933-49b6-9c08-9e08ed8a7890" (UID: "ebaf00e1-0933-49b6-9c08-9e08ed8a7890"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.076482 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.076530 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.076545 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.076556 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plzjq\" (UniqueName: \"kubernetes.io/projected/ebaf00e1-0933-49b6-9c08-9e08ed8a7890-kube-api-access-plzjq\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.177778 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:14:00 crc kubenswrapper[4805]: E1216 12:14:00.178103 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 16 12:14:00 crc kubenswrapper[4805]: E1216 12:14:00.178131 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 16 12:14:00 crc kubenswrapper[4805]: E1216 12:14:00.178195 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift podName:c7ed9b27-9804-4584-a244-30ba1f033e17 nodeName:}" failed. No retries permitted until 2025-12-16 12:14:04.178178511 +0000 UTC m=+1117.896436316 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift") pod "swift-storage-0" (UID: "c7ed9b27-9804-4584-a244-30ba1f033e17") : configmap "swift-ring-files" not found Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.272577 4805 generic.go:334] "Generic (PLEG): container finished" podID="2b009857-5d9e-4d6e-979c-d7fc3357bd66" containerID="ac22f1d397c8a12da84d5aec4a621a1d321d3c1407be31991889f6c4ab46eb8c" exitCode=0 Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.272646 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2b009857-5d9e-4d6e-979c-d7fc3357bd66","Type":"ContainerDied","Data":"ac22f1d397c8a12da84d5aec4a621a1d321d3c1407be31991889f6c4ab46eb8c"} Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.278492 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dd66837a-9e6f-41fd-91a0-f010e02a3a80","Type":"ContainerStarted","Data":"630f395e8e59e5d6e65f89b0ecf3d2f0583eaece51afac5cfe8b88f52100b2d2"} Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.292470 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-x944g" event={"ID":"ebaf00e1-0933-49b6-9c08-9e08ed8a7890","Type":"ContainerDied","Data":"f61d67b8791201c4ded0c2fe7a50ca0df3f56e7c71761512a1cf34cad7cb0f41"} Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.292532 4805 scope.go:117] "RemoveContainer" containerID="e7ce758e42537e945d64d94b1d5703dc2f89abb180eba32bbe9ca6b917510a0f" Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.292765 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-x944g" Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.310463 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jqth5" event={"ID":"a2cc735d-1342-4b12-a93e-d115daf8c3f3","Type":"ContainerStarted","Data":"375cbe045015783db95ff1ad9b5888a35c69c91f6c9bf21d12078186d8d9feb1"} Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.346342 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=15.200767421 podStartE2EDuration="1m2.346319085s" podCreationTimestamp="2025-12-16 12:12:58 +0000 UTC" firstStartedPulling="2025-12-16 12:13:00.577596523 +0000 UTC m=+1054.295854328" lastFinishedPulling="2025-12-16 12:13:47.723148187 +0000 UTC m=+1101.441405992" observedRunningTime="2025-12-16 12:14:00.340628262 +0000 UTC m=+1114.058886067" watchObservedRunningTime="2025-12-16 12:14:00.346319085 +0000 UTC m=+1114.064576900" Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.422991 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-x944g"] Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.450059 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-x944g"] Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.475863 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-jqth5" podStartSLOduration=6.47584604 podStartE2EDuration="6.47584604s" podCreationTimestamp="2025-12-16 12:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:14:00.469357413 +0000 UTC m=+1114.187615218" 
watchObservedRunningTime="2025-12-16 12:14:00.47584604 +0000 UTC m=+1114.194103855" Dec 16 12:14:00 crc kubenswrapper[4805]: I1216 12:14:00.591339 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebaf00e1-0933-49b6-9c08-9e08ed8a7890" path="/var/lib/kubelet/pods/ebaf00e1-0933-49b6-9c08-9e08ed8a7890/volumes" Dec 16 12:14:01 crc kubenswrapper[4805]: I1216 12:14:01.327504 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2b009857-5d9e-4d6e-979c-d7fc3357bd66","Type":"ContainerStarted","Data":"80a62096c1c53f5263b4ed72772570656bf30eff1ae836d31e152be58664f01b"} Dec 16 12:14:01 crc kubenswrapper[4805]: I1216 12:14:01.331236 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" event={"ID":"1cb53c8d-e006-4a9a-b2f9-4d7c34732801","Type":"ContainerStarted","Data":"fc4b677964110327b90277c801504b539b3d6584d6929e2509264d8b173524a7"} Dec 16 12:14:01 crc kubenswrapper[4805]: I1216 12:14:01.331570 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:14:01 crc kubenswrapper[4805]: I1216 12:14:01.332949 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:14:01 crc kubenswrapper[4805]: I1216 12:14:01.349963 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=17.133964921 podStartE2EDuration="1m2.349925993s" podCreationTimestamp="2025-12-16 12:12:59 +0000 UTC" firstStartedPulling="2025-12-16 12:13:02.382954741 +0000 UTC m=+1056.101212556" lastFinishedPulling="2025-12-16 12:13:47.598915823 +0000 UTC m=+1101.317173628" observedRunningTime="2025-12-16 12:14:01.346176615 +0000 UTC m=+1115.064434420" watchObservedRunningTime="2025-12-16 12:14:01.349925993 +0000 UTC m=+1115.068183808" Dec 16 12:14:01 crc kubenswrapper[4805]: I1216 12:14:01.354297 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 16 12:14:01 crc kubenswrapper[4805]: I1216 12:14:01.354331 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 16 12:14:01 crc kubenswrapper[4805]: I1216 12:14:01.376321 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" podStartSLOduration=4.923551153 podStartE2EDuration="12.376301831s" podCreationTimestamp="2025-12-16 12:13:49 +0000 UTC" firstStartedPulling="2025-12-16 12:13:51.019896383 +0000 UTC m=+1104.738154188" lastFinishedPulling="2025-12-16 12:13:58.472647051 +0000 UTC m=+1112.190904866" observedRunningTime="2025-12-16 12:14:01.367244971 +0000 UTC m=+1115.085502786" watchObservedRunningTime="2025-12-16 12:14:01.376301831 +0000 UTC m=+1115.094559646" Dec 16 12:14:03 crc kubenswrapper[4805]: I1216 12:14:03.549408 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 16 12:14:04 crc kubenswrapper[4805]: I1216 12:14:04.273521 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:14:04 crc kubenswrapper[4805]: E1216 12:14:04.273791 4805 projected.go:288] Couldn't get configMap 
openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 16 12:14:04 crc kubenswrapper[4805]: E1216 12:14:04.274530 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 16 12:14:04 crc kubenswrapper[4805]: E1216 12:14:04.274597 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift podName:c7ed9b27-9804-4584-a244-30ba1f033e17 nodeName:}" failed. No retries permitted until 2025-12-16 12:14:12.274574195 +0000 UTC m=+1125.992832000 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift") pod "swift-storage-0" (UID: "c7ed9b27-9804-4584-a244-30ba1f033e17") : configmap "swift-ring-files" not found Dec 16 12:14:05 crc kubenswrapper[4805]: I1216 12:14:05.254746 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:14:05 crc kubenswrapper[4805]: I1216 12:14:05.266709 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:14:05 crc kubenswrapper[4805]: I1216 12:14:05.354023 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9x7sc"] Dec 16 12:14:05 crc kubenswrapper[4805]: I1216 12:14:05.415517 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"87517712-55f8-42c7-8a23-cb388090ed3c","Type":"ContainerStarted","Data":"59b0d004275f334fc9e134ca892da6acd63474c3abe1f550a94fa258a83cff92"} Dec 16 12:14:05 crc kubenswrapper[4805]: I1216 12:14:05.432749 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xsqmm" event={"ID":"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5","Type":"ContainerStarted","Data":"79501e89bb195abde97ce84fea2a7a6bbc7e05868467693599a398ee603ec92e"} Dec 16 12:14:05 crc kubenswrapper[4805]: I1216 12:14:05.432773 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" podUID="1cb53c8d-e006-4a9a-b2f9-4d7c34732801" containerName="dnsmasq-dns" containerID="cri-o://fc4b677964110327b90277c801504b539b3d6584d6929e2509264d8b173524a7" gracePeriod=10 Dec 16 12:14:05 crc kubenswrapper[4805]: I1216 12:14:05.474044 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-xsqmm" podStartSLOduration=3.888290972 podStartE2EDuration="10.474024973s" podCreationTimestamp="2025-12-16 12:13:55 +0000 UTC" firstStartedPulling="2025-12-16 12:13:58.290620937 +0000 UTC m=+1112.008878742" lastFinishedPulling="2025-12-16 12:14:04.876354928 +0000 UTC m=+1118.594612743" observedRunningTime="2025-12-16 12:14:05.467924697 +0000 UTC m=+1119.186182512" watchObservedRunningTime="2025-12-16 12:14:05.474024973 +0000 UTC m=+1119.192282788" Dec 16 12:14:05 crc kubenswrapper[4805]: I1216 12:14:05.842333 4805 util.go:48] "No ready sandbox for pod can be found. 
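
Annotation: "Killing container with a grace period ... gracePeriod=10" above is the standard two-phase stop: the runtime delivers SIGTERM, waits up to the grace period, and only then escalates to SIGKILL. A self-contained Unix sketch of the pattern, with a plain child process standing in for the dnsmasq-dns container:

    package main

    import (
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        cmd := exec.Command("sleep", "30") // stand-in for the container's main process
        if err := cmd.Start(); err != nil {
            panic(err)
        }

        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        _ = cmd.Process.Signal(syscall.SIGTERM) // phase 1: polite stop, grace window opens
        select {
        case <-done: // exited within the grace period (sleep does, on SIGTERM)
        case <-time.After(10 * time.Second): // gracePeriod=10, as in the log
            _ = cmd.Process.Kill() // phase 2: SIGKILL
            <-done
        }
    }

In the records that follow, the dnsmasq-dns container exits with exitCode=0 well inside the 10s window, so the hard kill never fires.
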
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:14:05 crc kubenswrapper[4805]: I1216 12:14:05.965988 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-dns-svc\") pod \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " Dec 16 12:14:05 crc kubenswrapper[4805]: I1216 12:14:05.966311 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-ovsdbserver-sb\") pod \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " Dec 16 12:14:05 crc kubenswrapper[4805]: I1216 12:14:05.966341 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7dfv\" (UniqueName: \"kubernetes.io/projected/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-kube-api-access-d7dfv\") pod \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " Dec 16 12:14:05 crc kubenswrapper[4805]: I1216 12:14:05.966375 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-ovsdbserver-nb\") pod \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " Dec 16 12:14:05 crc kubenswrapper[4805]: I1216 12:14:05.966447 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-config\") pod \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\" (UID: \"1cb53c8d-e006-4a9a-b2f9-4d7c34732801\") " Dec 16 12:14:05 crc kubenswrapper[4805]: I1216 12:14:05.982374 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-kube-api-access-d7dfv" (OuterVolumeSpecName: "kube-api-access-d7dfv") pod "1cb53c8d-e006-4a9a-b2f9-4d7c34732801" (UID: "1cb53c8d-e006-4a9a-b2f9-4d7c34732801"). InnerVolumeSpecName "kube-api-access-d7dfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.075015 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7dfv\" (UniqueName: \"kubernetes.io/projected/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-kube-api-access-d7dfv\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.175444 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1cb53c8d-e006-4a9a-b2f9-4d7c34732801" (UID: "1cb53c8d-e006-4a9a-b2f9-4d7c34732801"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.175882 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.176231 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1cb53c8d-e006-4a9a-b2f9-4d7c34732801" (UID: "1cb53c8d-e006-4a9a-b2f9-4d7c34732801"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.176455 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-config" (OuterVolumeSpecName: "config") pod "1cb53c8d-e006-4a9a-b2f9-4d7c34732801" (UID: "1cb53c8d-e006-4a9a-b2f9-4d7c34732801"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.193559 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1cb53c8d-e006-4a9a-b2f9-4d7c34732801" (UID: "1cb53c8d-e006-4a9a-b2f9-4d7c34732801"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.276577 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.276611 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.276622 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cb53c8d-e006-4a9a-b2f9-4d7c34732801-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.443739 4805 generic.go:334] "Generic (PLEG): container finished" podID="1cb53c8d-e006-4a9a-b2f9-4d7c34732801" containerID="fc4b677964110327b90277c801504b539b3d6584d6929e2509264d8b173524a7" exitCode=0 Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.443817 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" event={"ID":"1cb53c8d-e006-4a9a-b2f9-4d7c34732801","Type":"ContainerDied","Data":"fc4b677964110327b90277c801504b539b3d6584d6929e2509264d8b173524a7"} Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.443852 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" event={"ID":"1cb53c8d-e006-4a9a-b2f9-4d7c34732801","Type":"ContainerDied","Data":"99a6a0ae3cbb743b50fd3c2a5b8d026d471cee8029a53c7eb46c220760bb9c82"} Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.444032 4805 scope.go:117] "RemoveContainer" containerID="fc4b677964110327b90277c801504b539b3d6584d6929e2509264d8b173524a7" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.444223 4805 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9x7sc" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.447568 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"87517712-55f8-42c7-8a23-cb388090ed3c","Type":"ContainerStarted","Data":"611b8ede50d0b90abc51d52535d2345c92c472509fcbfe8817b67c70ae654bf9"} Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.447659 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.473873 4805 scope.go:117] "RemoveContainer" containerID="3246f8e2a878b7d76e068ccd9c6ffff32156075730f1d1b7b3534ea4e986610e" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.484453 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.805402492 podStartE2EDuration="12.484432505s" podCreationTimestamp="2025-12-16 12:13:54 +0000 UTC" firstStartedPulling="2025-12-16 12:13:58.203375429 +0000 UTC m=+1111.921633234" lastFinishedPulling="2025-12-16 12:14:04.882405442 +0000 UTC m=+1118.600663247" observedRunningTime="2025-12-16 12:14:06.477684271 +0000 UTC m=+1120.195942086" watchObservedRunningTime="2025-12-16 12:14:06.484432505 +0000 UTC m=+1120.202690320" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.504997 4805 scope.go:117] "RemoveContainer" containerID="fc4b677964110327b90277c801504b539b3d6584d6929e2509264d8b173524a7" Dec 16 12:14:06 crc kubenswrapper[4805]: E1216 12:14:06.505486 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc4b677964110327b90277c801504b539b3d6584d6929e2509264d8b173524a7\": container with ID starting with fc4b677964110327b90277c801504b539b3d6584d6929e2509264d8b173524a7 not found: ID does not exist" containerID="fc4b677964110327b90277c801504b539b3d6584d6929e2509264d8b173524a7" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.505532 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4b677964110327b90277c801504b539b3d6584d6929e2509264d8b173524a7"} err="failed to get container status \"fc4b677964110327b90277c801504b539b3d6584d6929e2509264d8b173524a7\": rpc error: code = NotFound desc = could not find container \"fc4b677964110327b90277c801504b539b3d6584d6929e2509264d8b173524a7\": container with ID starting with fc4b677964110327b90277c801504b539b3d6584d6929e2509264d8b173524a7 not found: ID does not exist" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.505563 4805 scope.go:117] "RemoveContainer" containerID="3246f8e2a878b7d76e068ccd9c6ffff32156075730f1d1b7b3534ea4e986610e" Dec 16 12:14:06 crc kubenswrapper[4805]: E1216 12:14:06.505900 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3246f8e2a878b7d76e068ccd9c6ffff32156075730f1d1b7b3534ea4e986610e\": container with ID starting with 3246f8e2a878b7d76e068ccd9c6ffff32156075730f1d1b7b3534ea4e986610e not found: ID does not exist" containerID="3246f8e2a878b7d76e068ccd9c6ffff32156075730f1d1b7b3534ea4e986610e" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.505941 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3246f8e2a878b7d76e068ccd9c6ffff32156075730f1d1b7b3534ea4e986610e"} err="failed to get container status 
\"3246f8e2a878b7d76e068ccd9c6ffff32156075730f1d1b7b3534ea4e986610e\": rpc error: code = NotFound desc = could not find container \"3246f8e2a878b7d76e068ccd9c6ffff32156075730f1d1b7b3534ea4e986610e\": container with ID starting with 3246f8e2a878b7d76e068ccd9c6ffff32156075730f1d1b7b3534ea4e986610e not found: ID does not exist" Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.522057 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9x7sc"] Dec 16 12:14:06 crc kubenswrapper[4805]: I1216 12:14:06.547301 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9x7sc"] Dec 16 12:14:08 crc kubenswrapper[4805]: I1216 12:14:08.531374 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb53c8d-e006-4a9a-b2f9-4d7c34732801" path="/var/lib/kubelet/pods/1cb53c8d-e006-4a9a-b2f9-4d7c34732801/volumes" Dec 16 12:14:09 crc kubenswrapper[4805]: I1216 12:14:09.762106 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 16 12:14:09 crc kubenswrapper[4805]: I1216 12:14:09.762493 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 16 12:14:09 crc kubenswrapper[4805]: I1216 12:14:09.805050 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 16 12:14:10 crc kubenswrapper[4805]: I1216 12:14:10.599364 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 16 12:14:10 crc kubenswrapper[4805]: I1216 12:14:10.622204 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 16 12:14:10 crc kubenswrapper[4805]: I1216 12:14:10.652370 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.217176 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-b6p7d"] Dec 16 12:14:11 crc kubenswrapper[4805]: E1216 12:14:11.217545 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb53c8d-e006-4a9a-b2f9-4d7c34732801" containerName="dnsmasq-dns" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.217558 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb53c8d-e006-4a9a-b2f9-4d7c34732801" containerName="dnsmasq-dns" Dec 16 12:14:11 crc kubenswrapper[4805]: E1216 12:14:11.217570 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb53c8d-e006-4a9a-b2f9-4d7c34732801" containerName="init" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.217577 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb53c8d-e006-4a9a-b2f9-4d7c34732801" containerName="init" Dec 16 12:14:11 crc kubenswrapper[4805]: E1216 12:14:11.217595 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebaf00e1-0933-49b6-9c08-9e08ed8a7890" containerName="init" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.217607 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebaf00e1-0933-49b6-9c08-9e08ed8a7890" containerName="init" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.217777 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebaf00e1-0933-49b6-9c08-9e08ed8a7890" containerName="init" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.217792 4805 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="1cb53c8d-e006-4a9a-b2f9-4d7c34732801" containerName="dnsmasq-dns" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.218329 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-b6p7d" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.242933 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-b6p7d"] Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.348604 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmvfk\" (UniqueName: \"kubernetes.io/projected/70a53daa-fa15-46b2-b365-bca57d860620-kube-api-access-dmvfk\") pod \"keystone-db-create-b6p7d\" (UID: \"70a53daa-fa15-46b2-b365-bca57d860620\") " pod="openstack/keystone-db-create-b6p7d" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.418503 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-6fqc4"] Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.419891 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6fqc4" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.429815 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6fqc4"] Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.450392 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmvfk\" (UniqueName: \"kubernetes.io/projected/70a53daa-fa15-46b2-b365-bca57d860620-kube-api-access-dmvfk\") pod \"keystone-db-create-b6p7d\" (UID: \"70a53daa-fa15-46b2-b365-bca57d860620\") " pod="openstack/keystone-db-create-b6p7d" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.474912 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmvfk\" (UniqueName: \"kubernetes.io/projected/70a53daa-fa15-46b2-b365-bca57d860620-kube-api-access-dmvfk\") pod \"keystone-db-create-b6p7d\" (UID: \"70a53daa-fa15-46b2-b365-bca57d860620\") " pod="openstack/keystone-db-create-b6p7d" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.539718 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-b6p7d" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.552124 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtqr7\" (UniqueName: \"kubernetes.io/projected/3b651a61-b4a2-45c6-a349-fde447509d3c-kube-api-access-qtqr7\") pod \"placement-db-create-6fqc4\" (UID: \"3b651a61-b4a2-45c6-a349-fde447509d3c\") " pod="openstack/placement-db-create-6fqc4" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.656097 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtqr7\" (UniqueName: \"kubernetes.io/projected/3b651a61-b4a2-45c6-a349-fde447509d3c-kube-api-access-qtqr7\") pod \"placement-db-create-6fqc4\" (UID: \"3b651a61-b4a2-45c6-a349-fde447509d3c\") " pod="openstack/placement-db-create-6fqc4" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.690939 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtqr7\" (UniqueName: \"kubernetes.io/projected/3b651a61-b4a2-45c6-a349-fde447509d3c-kube-api-access-qtqr7\") pod \"placement-db-create-6fqc4\" (UID: \"3b651a61-b4a2-45c6-a349-fde447509d3c\") " pod="openstack/placement-db-create-6fqc4" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.745370 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6fqc4" Dec 16 12:14:11 crc kubenswrapper[4805]: I1216 12:14:11.837106 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-b6p7d"] Dec 16 12:14:12 crc kubenswrapper[4805]: I1216 12:14:12.350216 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6fqc4"] Dec 16 12:14:12 crc kubenswrapper[4805]: I1216 12:14:12.368603 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:14:12 crc kubenswrapper[4805]: E1216 12:14:12.368884 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 16 12:14:12 crc kubenswrapper[4805]: E1216 12:14:12.368919 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 16 12:14:12 crc kubenswrapper[4805]: E1216 12:14:12.368963 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift podName:c7ed9b27-9804-4584-a244-30ba1f033e17 nodeName:}" failed. No retries permitted until 2025-12-16 12:14:28.368949144 +0000 UTC m=+1142.087206939 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift") pod "swift-storage-0" (UID: "c7ed9b27-9804-4584-a244-30ba1f033e17") : configmap "swift-ring-files" not found Dec 16 12:14:12 crc kubenswrapper[4805]: I1216 12:14:12.504654 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6fqc4" event={"ID":"3b651a61-b4a2-45c6-a349-fde447509d3c","Type":"ContainerStarted","Data":"45cf9afe4fadbe725c0e94996d5bc3efce1008374d2a7ea8846cffd451a4eb45"} Dec 16 12:14:12 crc kubenswrapper[4805]: I1216 12:14:12.507734 4805 generic.go:334] "Generic (PLEG): container finished" podID="70a53daa-fa15-46b2-b365-bca57d860620" containerID="199b6e47cf4a7f85b91da8362f267238e4568e4147890235d02eeebb174b4418" exitCode=0 Dec 16 12:14:12 crc kubenswrapper[4805]: I1216 12:14:12.507784 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-b6p7d" event={"ID":"70a53daa-fa15-46b2-b365-bca57d860620","Type":"ContainerDied","Data":"199b6e47cf4a7f85b91da8362f267238e4568e4147890235d02eeebb174b4418"} Dec 16 12:14:12 crc kubenswrapper[4805]: I1216 12:14:12.507806 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-b6p7d" event={"ID":"70a53daa-fa15-46b2-b365-bca57d860620","Type":"ContainerStarted","Data":"0863334e09545f95dd14c10bcd41315ccf3fd96daed27253c872823c7600a29a"} Dec 16 12:14:13 crc kubenswrapper[4805]: I1216 12:14:13.520722 4805 generic.go:334] "Generic (PLEG): container finished" podID="3b651a61-b4a2-45c6-a349-fde447509d3c" containerID="29ccad7a5e65257ba490efa4bf6f5768e68751a2c1dedbe93844d47589aba742" exitCode=0 Dec 16 12:14:13 crc kubenswrapper[4805]: I1216 12:14:13.520777 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6fqc4" event={"ID":"3b651a61-b4a2-45c6-a349-fde447509d3c","Type":"ContainerDied","Data":"29ccad7a5e65257ba490efa4bf6f5768e68751a2c1dedbe93844d47589aba742"} Dec 16 12:14:13 crc kubenswrapper[4805]: I1216 12:14:13.857652 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-b6p7d" Dec 16 12:14:13 crc kubenswrapper[4805]: I1216 12:14:13.996183 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmvfk\" (UniqueName: \"kubernetes.io/projected/70a53daa-fa15-46b2-b365-bca57d860620-kube-api-access-dmvfk\") pod \"70a53daa-fa15-46b2-b365-bca57d860620\" (UID: \"70a53daa-fa15-46b2-b365-bca57d860620\") " Dec 16 12:14:14 crc kubenswrapper[4805]: I1216 12:14:14.001310 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70a53daa-fa15-46b2-b365-bca57d860620-kube-api-access-dmvfk" (OuterVolumeSpecName: "kube-api-access-dmvfk") pod "70a53daa-fa15-46b2-b365-bca57d860620" (UID: "70a53daa-fa15-46b2-b365-bca57d860620"). InnerVolumeSpecName "kube-api-access-dmvfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:14:14 crc kubenswrapper[4805]: I1216 12:14:14.099038 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmvfk\" (UniqueName: \"kubernetes.io/projected/70a53daa-fa15-46b2-b365-bca57d860620-kube-api-access-dmvfk\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:14 crc kubenswrapper[4805]: I1216 12:14:14.531479 4805 util.go:48] "No ready sandbox for pod can be found. 
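
Annotation: keystone-db-create-b6p7d and placement-db-create-6fqc4 above are one-shot database-create jobs. The PLEG relists the runtime, sees their container go from running to exited with exitCode=0, and surfaces that as ContainerDied events; the volume teardown that follows is the normal epilogue for a completed job pod. A toy relist diff, just the idea, not the kubelet's PLEG:

    package main

    import "fmt"

    type state string

    const (
        running state = "running"
        exited  state = "exited"
    )

    // diff compares two relists and emits PLEG-style lifecycle events.
    func diff(prev, cur map[string]state) []string {
        var events []string
        for id, st := range cur {
            switch {
            case st == running && prev[id] != running:
                events = append(events, "ContainerStarted "+id)
            case st == exited && prev[id] == running:
                events = append(events, "ContainerDied "+id)
            }
        }
        return events
    }

    func main() {
        prev := map[string]state{"199b6e47": running} // keystone-db-create's container, last relist
        cur := map[string]state{"199b6e47": exited}   // this relist: finished with exitCode=0
        fmt.Println(diff(prev, cur))                  // [ContainerDied 199b6e47]
    }
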
Need to start a new one" pod="openstack/keystone-db-create-b6p7d" Dec 16 12:14:14 crc kubenswrapper[4805]: I1216 12:14:14.532280 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-b6p7d" event={"ID":"70a53daa-fa15-46b2-b365-bca57d860620","Type":"ContainerDied","Data":"0863334e09545f95dd14c10bcd41315ccf3fd96daed27253c872823c7600a29a"} Dec 16 12:14:14 crc kubenswrapper[4805]: I1216 12:14:14.532322 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0863334e09545f95dd14c10bcd41315ccf3fd96daed27253c872823c7600a29a" Dec 16 12:14:14 crc kubenswrapper[4805]: I1216 12:14:14.533516 4805 generic.go:334] "Generic (PLEG): container finished" podID="09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5" containerID="79501e89bb195abde97ce84fea2a7a6bbc7e05868467693599a398ee603ec92e" exitCode=0 Dec 16 12:14:14 crc kubenswrapper[4805]: I1216 12:14:14.533549 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xsqmm" event={"ID":"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5","Type":"ContainerDied","Data":"79501e89bb195abde97ce84fea2a7a6bbc7e05868467693599a398ee603ec92e"} Dec 16 12:14:14 crc kubenswrapper[4805]: E1216 12:14:14.603588 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70a53daa_fa15_46b2_b365_bca57d860620.slice/crio-0863334e09545f95dd14c10bcd41315ccf3fd96daed27253c872823c7600a29a\": RecentStats: unable to find data in memory cache]" Dec 16 12:14:14 crc kubenswrapper[4805]: I1216 12:14:14.900759 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6fqc4" Dec 16 12:14:15 crc kubenswrapper[4805]: I1216 12:14:15.016386 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtqr7\" (UniqueName: \"kubernetes.io/projected/3b651a61-b4a2-45c6-a349-fde447509d3c-kube-api-access-qtqr7\") pod \"3b651a61-b4a2-45c6-a349-fde447509d3c\" (UID: \"3b651a61-b4a2-45c6-a349-fde447509d3c\") " Dec 16 12:14:15 crc kubenswrapper[4805]: I1216 12:14:15.028738 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b651a61-b4a2-45c6-a349-fde447509d3c-kube-api-access-qtqr7" (OuterVolumeSpecName: "kube-api-access-qtqr7") pod "3b651a61-b4a2-45c6-a349-fde447509d3c" (UID: "3b651a61-b4a2-45c6-a349-fde447509d3c"). InnerVolumeSpecName "kube-api-access-qtqr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:14:15 crc kubenswrapper[4805]: I1216 12:14:15.118434 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtqr7\" (UniqueName: \"kubernetes.io/projected/3b651a61-b4a2-45c6-a349-fde447509d3c-kube-api-access-qtqr7\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:15 crc kubenswrapper[4805]: I1216 12:14:15.542745 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6fqc4" event={"ID":"3b651a61-b4a2-45c6-a349-fde447509d3c","Type":"ContainerDied","Data":"45cf9afe4fadbe725c0e94996d5bc3efce1008374d2a7ea8846cffd451a4eb45"} Dec 16 12:14:15 crc kubenswrapper[4805]: I1216 12:14:15.542794 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45cf9afe4fadbe725c0e94996d5bc3efce1008374d2a7ea8846cffd451a4eb45" Dec 16 12:14:15 crc kubenswrapper[4805]: I1216 12:14:15.542770 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-6fqc4" Dec 16 12:14:15 crc kubenswrapper[4805]: I1216 12:14:15.544056 4805 generic.go:334] "Generic (PLEG): container finished" podID="93a741b0-5bcd-407b-8af7-90bd52380217" containerID="87a734dd4fa1ce3df2d708801310e4e55538ae62cd5acb8dd81e886b21729644" exitCode=0 Dec 16 12:14:15 crc kubenswrapper[4805]: I1216 12:14:15.544212 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"93a741b0-5bcd-407b-8af7-90bd52380217","Type":"ContainerDied","Data":"87a734dd4fa1ce3df2d708801310e4e55538ae62cd5acb8dd81e886b21729644"} Dec 16 12:14:15 crc kubenswrapper[4805]: I1216 12:14:15.915942 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.033339 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-scripts\") pod \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.033383 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xcfm\" (UniqueName: \"kubernetes.io/projected/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-kube-api-access-7xcfm\") pod \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.033411 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-ring-data-devices\") pod \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.033457 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-dispersionconf\") pod \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.033486 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-swiftconf\") pod \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.033536 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-combined-ca-bundle\") pod \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.033622 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-etc-swift\") pod \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\" (UID: \"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5\") " Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.034776 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod 
"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5" (UID: "09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.035276 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5" (UID: "09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.041978 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5" (UID: "09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.044450 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-kube-api-access-7xcfm" (OuterVolumeSpecName: "kube-api-access-7xcfm") pod "09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5" (UID: "09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5"). InnerVolumeSpecName "kube-api-access-7xcfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.055927 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-scripts" (OuterVolumeSpecName: "scripts") pod "09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5" (UID: "09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.056769 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5" (UID: "09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.077195 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5" (UID: "09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.136088 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.136126 4805 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.136156 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.136170 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xcfm\" (UniqueName: \"kubernetes.io/projected/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-kube-api-access-7xcfm\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.136184 4805 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.136196 4805 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.136207 4805 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.551529 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-xsqmm" event={"ID":"09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5","Type":"ContainerDied","Data":"84af452ed5d2829d90c12d8b6c26508a609a56c5ad7d1d48c7097b343c275d09"} Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.551739 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84af452ed5d2829d90c12d8b6c26508a609a56c5ad7d1d48c7097b343c275d09" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.551764 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-xsqmm" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.553537 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"93a741b0-5bcd-407b-8af7-90bd52380217","Type":"ContainerStarted","Data":"16da384c12dcff4fba64983e8a411139d390f4effa26608f4d4c564762cc9686"} Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.553740 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.590391 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371957.264408 podStartE2EDuration="1m19.590368593s" podCreationTimestamp="2025-12-16 12:12:57 +0000 UTC" firstStartedPulling="2025-12-16 12:12:59.299283333 +0000 UTC m=+1053.017541138" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:14:16.587646035 +0000 UTC m=+1130.305903840" watchObservedRunningTime="2025-12-16 12:14:16.590368593 +0000 UTC m=+1130.308626418" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.801178 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-pk8zf"] Dec 16 12:14:16 crc kubenswrapper[4805]: E1216 12:14:16.801583 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b651a61-b4a2-45c6-a349-fde447509d3c" containerName="mariadb-database-create" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.801600 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b651a61-b4a2-45c6-a349-fde447509d3c" containerName="mariadb-database-create" Dec 16 12:14:16 crc kubenswrapper[4805]: E1216 12:14:16.801618 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5" containerName="swift-ring-rebalance" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.801627 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5" containerName="swift-ring-rebalance" Dec 16 12:14:16 crc kubenswrapper[4805]: E1216 12:14:16.801647 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a53daa-fa15-46b2-b365-bca57d860620" containerName="mariadb-database-create" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.801657 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a53daa-fa15-46b2-b365-bca57d860620" containerName="mariadb-database-create" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.801882 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b651a61-b4a2-45c6-a349-fde447509d3c" containerName="mariadb-database-create" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.801899 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5" containerName="swift-ring-rebalance" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.801916 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="70a53daa-fa15-46b2-b365-bca57d860620" containerName="mariadb-database-create" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.802589 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pk8zf" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.818732 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pk8zf"] Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.845126 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2ktg\" (UniqueName: \"kubernetes.io/projected/22ff15b5-9ba4-47c0-87a0-7d4400ce9e17-kube-api-access-q2ktg\") pod \"glance-db-create-pk8zf\" (UID: \"22ff15b5-9ba4-47c0-87a0-7d4400ce9e17\") " pod="openstack/glance-db-create-pk8zf" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.947327 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2ktg\" (UniqueName: \"kubernetes.io/projected/22ff15b5-9ba4-47c0-87a0-7d4400ce9e17-kube-api-access-q2ktg\") pod \"glance-db-create-pk8zf\" (UID: \"22ff15b5-9ba4-47c0-87a0-7d4400ce9e17\") " pod="openstack/glance-db-create-pk8zf" Dec 16 12:14:16 crc kubenswrapper[4805]: I1216 12:14:16.982640 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2ktg\" (UniqueName: \"kubernetes.io/projected/22ff15b5-9ba4-47c0-87a0-7d4400ce9e17-kube-api-access-q2ktg\") pod \"glance-db-create-pk8zf\" (UID: \"22ff15b5-9ba4-47c0-87a0-7d4400ce9e17\") " pod="openstack/glance-db-create-pk8zf" Dec 16 12:14:17 crc kubenswrapper[4805]: I1216 12:14:17.117776 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pk8zf" Dec 16 12:14:17 crc kubenswrapper[4805]: I1216 12:14:17.567270 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pk8zf" event={"ID":"22ff15b5-9ba4-47c0-87a0-7d4400ce9e17","Type":"ContainerStarted","Data":"7c30fb07cbae72ee11cd0e468a764b969a4efce1124643319ef1585ab3b22284"} Dec 16 12:14:17 crc kubenswrapper[4805]: I1216 12:14:17.574577 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pk8zf"] Dec 16 12:14:18 crc kubenswrapper[4805]: I1216 12:14:18.575813 4805 generic.go:334] "Generic (PLEG): container finished" podID="22ff15b5-9ba4-47c0-87a0-7d4400ce9e17" containerID="8abee5f9389d4d68a11f937138fc5d7fea8b649f5ff7e4b00d4b1e4f4e9eb7e1" exitCode=0 Dec 16 12:14:18 crc kubenswrapper[4805]: I1216 12:14:18.575863 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pk8zf" event={"ID":"22ff15b5-9ba4-47c0-87a0-7d4400ce9e17","Type":"ContainerDied","Data":"8abee5f9389d4d68a11f937138fc5d7fea8b649f5ff7e4b00d4b1e4f4e9eb7e1"} Dec 16 12:14:19 crc kubenswrapper[4805]: I1216 12:14:19.497152 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 16 12:14:19 crc kubenswrapper[4805]: I1216 12:14:19.586359 4805 generic.go:334] "Generic (PLEG): container finished" podID="55267a44-aaa0-494b-922a-014b08eddcd9" containerID="ba41853f3136944c440c35ba755957daedde0ef9f029dbcc7b426ba254586616" exitCode=0 Dec 16 12:14:19 crc kubenswrapper[4805]: I1216 12:14:19.586552 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"55267a44-aaa0-494b-922a-014b08eddcd9","Type":"ContainerDied","Data":"ba41853f3136944c440c35ba755957daedde0ef9f029dbcc7b426ba254586616"} Dec 16 12:14:20 crc kubenswrapper[4805]: I1216 12:14:20.078005 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pk8zf" Dec 16 12:14:20 crc kubenswrapper[4805]: I1216 12:14:20.216075 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2ktg\" (UniqueName: \"kubernetes.io/projected/22ff15b5-9ba4-47c0-87a0-7d4400ce9e17-kube-api-access-q2ktg\") pod \"22ff15b5-9ba4-47c0-87a0-7d4400ce9e17\" (UID: \"22ff15b5-9ba4-47c0-87a0-7d4400ce9e17\") " Dec 16 12:14:20 crc kubenswrapper[4805]: I1216 12:14:20.220728 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22ff15b5-9ba4-47c0-87a0-7d4400ce9e17-kube-api-access-q2ktg" (OuterVolumeSpecName: "kube-api-access-q2ktg") pod "22ff15b5-9ba4-47c0-87a0-7d4400ce9e17" (UID: "22ff15b5-9ba4-47c0-87a0-7d4400ce9e17"). InnerVolumeSpecName "kube-api-access-q2ktg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:14:20 crc kubenswrapper[4805]: I1216 12:14:20.317817 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2ktg\" (UniqueName: \"kubernetes.io/projected/22ff15b5-9ba4-47c0-87a0-7d4400ce9e17-kube-api-access-q2ktg\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:20 crc kubenswrapper[4805]: I1216 12:14:20.597128 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pk8zf" Dec 16 12:14:20 crc kubenswrapper[4805]: I1216 12:14:20.597134 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pk8zf" event={"ID":"22ff15b5-9ba4-47c0-87a0-7d4400ce9e17","Type":"ContainerDied","Data":"7c30fb07cbae72ee11cd0e468a764b969a4efce1124643319ef1585ab3b22284"} Dec 16 12:14:20 crc kubenswrapper[4805]: I1216 12:14:20.597190 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c30fb07cbae72ee11cd0e468a764b969a4efce1124643319ef1585ab3b22284" Dec 16 12:14:20 crc kubenswrapper[4805]: I1216 12:14:20.599208 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"55267a44-aaa0-494b-922a-014b08eddcd9","Type":"ContainerStarted","Data":"1e25954fb7d4a311e570da01a2cad43843fd77f007db317cf2ff46c78efb4f16"} Dec 16 12:14:20 crc kubenswrapper[4805]: I1216 12:14:20.600444 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 16 12:14:20 crc kubenswrapper[4805]: I1216 12:14:20.637243 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=44.964234527 podStartE2EDuration="1m24.637220283s" podCreationTimestamp="2025-12-16 12:12:56 +0000 UTC" firstStartedPulling="2025-12-16 12:12:59.086106381 +0000 UTC m=+1052.804364186" lastFinishedPulling="2025-12-16 12:13:38.759092137 +0000 UTC m=+1092.477349942" observedRunningTime="2025-12-16 12:14:20.628680877 +0000 UTC m=+1134.346938692" watchObservedRunningTime="2025-12-16 12:14:20.637220283 +0000 UTC m=+1134.355478108" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.107066 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.125670 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ffmtv" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.228073 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-546b-account-create-mx6tl"] Dec 16 12:14:21 crc kubenswrapper[4805]: E1216 
12:14:21.228472 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22ff15b5-9ba4-47c0-87a0-7d4400ce9e17" containerName="mariadb-database-create" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.228492 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ff15b5-9ba4-47c0-87a0-7d4400ce9e17" containerName="mariadb-database-create" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.228680 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="22ff15b5-9ba4-47c0-87a0-7d4400ce9e17" containerName="mariadb-database-create" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.229247 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-546b-account-create-mx6tl" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.237935 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.241512 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9tpw\" (UniqueName: \"kubernetes.io/projected/327fe989-8d12-4624-adb2-d2fe4680168c-kube-api-access-b9tpw\") pod \"keystone-546b-account-create-mx6tl\" (UID: \"327fe989-8d12-4624-adb2-d2fe4680168c\") " pod="openstack/keystone-546b-account-create-mx6tl" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.242013 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-546b-account-create-mx6tl"] Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.342902 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9tpw\" (UniqueName: \"kubernetes.io/projected/327fe989-8d12-4624-adb2-d2fe4680168c-kube-api-access-b9tpw\") pod \"keystone-546b-account-create-mx6tl\" (UID: \"327fe989-8d12-4624-adb2-d2fe4680168c\") " pod="openstack/keystone-546b-account-create-mx6tl" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.378925 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9tpw\" (UniqueName: \"kubernetes.io/projected/327fe989-8d12-4624-adb2-d2fe4680168c-kube-api-access-b9tpw\") pod \"keystone-546b-account-create-mx6tl\" (UID: \"327fe989-8d12-4624-adb2-d2fe4680168c\") " pod="openstack/keystone-546b-account-create-mx6tl" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.426295 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9xw22-config-kcsdj"] Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.429357 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.434517 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.460581 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9xw22-config-kcsdj"] Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.546102 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-scripts\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.546283 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-run-ovn\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.546540 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-additional-scripts\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.546653 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-run\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.546690 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-log-ovn\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.546732 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jss8\" (UniqueName: \"kubernetes.io/projected/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-kube-api-access-9jss8\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.563759 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-546b-account-create-mx6tl" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.627677 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1415-account-create-svvgs"] Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.630009 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1415-account-create-svvgs" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.633205 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.641034 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1415-account-create-svvgs"] Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.659756 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-additional-scripts\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.659858 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-run\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.659901 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-log-ovn\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.659926 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jss8\" (UniqueName: \"kubernetes.io/projected/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-kube-api-access-9jss8\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.660021 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-scripts\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.660126 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-run-ovn\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.661124 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-run\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.661506 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-additional-scripts\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " 
pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.663059 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-log-ovn\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.663126 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-run-ovn\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.665942 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-scripts\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.701473 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jss8\" (UniqueName: \"kubernetes.io/projected/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-kube-api-access-9jss8\") pod \"ovn-controller-9xw22-config-kcsdj\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.749463 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.775553 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j7lp\" (UniqueName: \"kubernetes.io/projected/42d0d1a8-0eb3-4d1d-899a-e1441332e34d-kube-api-access-5j7lp\") pod \"placement-1415-account-create-svvgs\" (UID: \"42d0d1a8-0eb3-4d1d-899a-e1441332e34d\") " pod="openstack/placement-1415-account-create-svvgs" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.877470 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j7lp\" (UniqueName: \"kubernetes.io/projected/42d0d1a8-0eb3-4d1d-899a-e1441332e34d-kube-api-access-5j7lp\") pod \"placement-1415-account-create-svvgs\" (UID: \"42d0d1a8-0eb3-4d1d-899a-e1441332e34d\") " pod="openstack/placement-1415-account-create-svvgs" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.910549 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j7lp\" (UniqueName: \"kubernetes.io/projected/42d0d1a8-0eb3-4d1d-899a-e1441332e34d-kube-api-access-5j7lp\") pod \"placement-1415-account-create-svvgs\" (UID: \"42d0d1a8-0eb3-4d1d-899a-e1441332e34d\") " pod="openstack/placement-1415-account-create-svvgs" Dec 16 12:14:21 crc kubenswrapper[4805]: I1216 12:14:21.975816 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1415-account-create-svvgs" Dec 16 12:14:22 crc kubenswrapper[4805]: I1216 12:14:22.187062 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-546b-account-create-mx6tl"] Dec 16 12:14:22 crc kubenswrapper[4805]: W1216 12:14:22.197529 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod327fe989_8d12_4624_adb2_d2fe4680168c.slice/crio-5885a87e962cbc7fb957d91748a396f3e2c805a5442020b3b46c6aa18f9fc4ae WatchSource:0}: Error finding container 5885a87e962cbc7fb957d91748a396f3e2c805a5442020b3b46c6aa18f9fc4ae: Status 404 returned error can't find the container with id 5885a87e962cbc7fb957d91748a396f3e2c805a5442020b3b46c6aa18f9fc4ae Dec 16 12:14:22 crc kubenswrapper[4805]: I1216 12:14:22.389259 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9xw22-config-kcsdj"] Dec 16 12:14:22 crc kubenswrapper[4805]: W1216 12:14:22.391801 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5735ff9_707c_4d2c_a5f1_9097f714e3c7.slice/crio-7da78aafdbbc9b61b12275e1665b8c6ec3ff7d10cb6e2d67379e263180de7cbe WatchSource:0}: Error finding container 7da78aafdbbc9b61b12275e1665b8c6ec3ff7d10cb6e2d67379e263180de7cbe: Status 404 returned error can't find the container with id 7da78aafdbbc9b61b12275e1665b8c6ec3ff7d10cb6e2d67379e263180de7cbe Dec 16 12:14:22 crc kubenswrapper[4805]: I1216 12:14:22.562277 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1415-account-create-svvgs"] Dec 16 12:14:22 crc kubenswrapper[4805]: W1216 12:14:22.568594 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42d0d1a8_0eb3_4d1d_899a_e1441332e34d.slice/crio-d42950b39a60adf55d0152953eb66f2f83d1013b19484e906e476ec4e70d7e2f WatchSource:0}: Error finding container d42950b39a60adf55d0152953eb66f2f83d1013b19484e906e476ec4e70d7e2f: Status 404 returned error can't find the container with id d42950b39a60adf55d0152953eb66f2f83d1013b19484e906e476ec4e70d7e2f Dec 16 12:14:22 crc kubenswrapper[4805]: I1216 12:14:22.644080 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1415-account-create-svvgs" event={"ID":"42d0d1a8-0eb3-4d1d-899a-e1441332e34d","Type":"ContainerStarted","Data":"d42950b39a60adf55d0152953eb66f2f83d1013b19484e906e476ec4e70d7e2f"} Dec 16 12:14:22 crc kubenswrapper[4805]: I1216 12:14:22.645495 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9xw22-config-kcsdj" event={"ID":"d5735ff9-707c-4d2c-a5f1-9097f714e3c7","Type":"ContainerStarted","Data":"7da78aafdbbc9b61b12275e1665b8c6ec3ff7d10cb6e2d67379e263180de7cbe"} Dec 16 12:14:22 crc kubenswrapper[4805]: I1216 12:14:22.647934 4805 generic.go:334] "Generic (PLEG): container finished" podID="327fe989-8d12-4624-adb2-d2fe4680168c" containerID="dac27f968d0db7cb9302edf8b58ccad2a8f13e5c3ac2c34c3aa0082667093ed9" exitCode=0 Dec 16 12:14:22 crc kubenswrapper[4805]: I1216 12:14:22.647967 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-546b-account-create-mx6tl" event={"ID":"327fe989-8d12-4624-adb2-d2fe4680168c","Type":"ContainerDied","Data":"dac27f968d0db7cb9302edf8b58ccad2a8f13e5c3ac2c34c3aa0082667093ed9"} Dec 16 12:14:22 crc kubenswrapper[4805]: I1216 12:14:22.647984 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-546b-account-create-mx6tl" event={"ID":"327fe989-8d12-4624-adb2-d2fe4680168c","Type":"ContainerStarted","Data":"5885a87e962cbc7fb957d91748a396f3e2c805a5442020b3b46c6aa18f9fc4ae"} Dec 16 12:14:23 crc kubenswrapper[4805]: I1216 12:14:23.657102 4805 generic.go:334] "Generic (PLEG): container finished" podID="42d0d1a8-0eb3-4d1d-899a-e1441332e34d" containerID="cd4ba4b3ef99a38fca5cba1c2ed06a07f724adea8b4f38fddc801cf66d9dd4d5" exitCode=0 Dec 16 12:14:23 crc kubenswrapper[4805]: I1216 12:14:23.657185 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1415-account-create-svvgs" event={"ID":"42d0d1a8-0eb3-4d1d-899a-e1441332e34d","Type":"ContainerDied","Data":"cd4ba4b3ef99a38fca5cba1c2ed06a07f724adea8b4f38fddc801cf66d9dd4d5"} Dec 16 12:14:23 crc kubenswrapper[4805]: I1216 12:14:23.659267 4805 generic.go:334] "Generic (PLEG): container finished" podID="d5735ff9-707c-4d2c-a5f1-9097f714e3c7" containerID="97ab27c6d088b4e5e6c6ccd8de6369894c35f4d52926d691c0e08120147a05b9" exitCode=0 Dec 16 12:14:23 crc kubenswrapper[4805]: I1216 12:14:23.659332 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9xw22-config-kcsdj" event={"ID":"d5735ff9-707c-4d2c-a5f1-9097f714e3c7","Type":"ContainerDied","Data":"97ab27c6d088b4e5e6c6ccd8de6369894c35f4d52926d691c0e08120147a05b9"} Dec 16 12:14:23 crc kubenswrapper[4805]: I1216 12:14:23.993629 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-546b-account-create-mx6tl" Dec 16 12:14:24 crc kubenswrapper[4805]: I1216 12:14:24.119247 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9tpw\" (UniqueName: \"kubernetes.io/projected/327fe989-8d12-4624-adb2-d2fe4680168c-kube-api-access-b9tpw\") pod \"327fe989-8d12-4624-adb2-d2fe4680168c\" (UID: \"327fe989-8d12-4624-adb2-d2fe4680168c\") " Dec 16 12:14:24 crc kubenswrapper[4805]: I1216 12:14:24.129336 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/327fe989-8d12-4624-adb2-d2fe4680168c-kube-api-access-b9tpw" (OuterVolumeSpecName: "kube-api-access-b9tpw") pod "327fe989-8d12-4624-adb2-d2fe4680168c" (UID: "327fe989-8d12-4624-adb2-d2fe4680168c"). InnerVolumeSpecName "kube-api-access-b9tpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:14:24 crc kubenswrapper[4805]: I1216 12:14:24.221868 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9tpw\" (UniqueName: \"kubernetes.io/projected/327fe989-8d12-4624-adb2-d2fe4680168c-kube-api-access-b9tpw\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:24 crc kubenswrapper[4805]: I1216 12:14:24.668891 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-546b-account-create-mx6tl" Dec 16 12:14:24 crc kubenswrapper[4805]: I1216 12:14:24.668983 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-546b-account-create-mx6tl" event={"ID":"327fe989-8d12-4624-adb2-d2fe4680168c","Type":"ContainerDied","Data":"5885a87e962cbc7fb957d91748a396f3e2c805a5442020b3b46c6aa18f9fc4ae"} Dec 16 12:14:24 crc kubenswrapper[4805]: I1216 12:14:24.669054 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5885a87e962cbc7fb957d91748a396f3e2c805a5442020b3b46c6aa18f9fc4ae" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.080716 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.088101 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1415-account-create-svvgs" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.243228 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jss8\" (UniqueName: \"kubernetes.io/projected/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-kube-api-access-9jss8\") pod \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.243602 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-scripts\") pod \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.243735 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j7lp\" (UniqueName: \"kubernetes.io/projected/42d0d1a8-0eb3-4d1d-899a-e1441332e34d-kube-api-access-5j7lp\") pod \"42d0d1a8-0eb3-4d1d-899a-e1441332e34d\" (UID: \"42d0d1a8-0eb3-4d1d-899a-e1441332e34d\") " Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.243832 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-run-ovn\") pod \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.243961 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-additional-scripts\") pod \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.244058 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-log-ovn\") pod \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.244178 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-run\") pod \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\" (UID: \"d5735ff9-707c-4d2c-a5f1-9097f714e3c7\") " Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.244642 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-run" (OuterVolumeSpecName: "var-run") pod "d5735ff9-707c-4d2c-a5f1-9097f714e3c7" (UID: "d5735ff9-707c-4d2c-a5f1-9097f714e3c7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.244663 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-scripts" (OuterVolumeSpecName: "scripts") pod "d5735ff9-707c-4d2c-a5f1-9097f714e3c7" (UID: "d5735ff9-707c-4d2c-a5f1-9097f714e3c7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.244860 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d5735ff9-707c-4d2c-a5f1-9097f714e3c7" (UID: "d5735ff9-707c-4d2c-a5f1-9097f714e3c7"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.244948 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d5735ff9-707c-4d2c-a5f1-9097f714e3c7" (UID: "d5735ff9-707c-4d2c-a5f1-9097f714e3c7"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.244954 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d5735ff9-707c-4d2c-a5f1-9097f714e3c7" (UID: "d5735ff9-707c-4d2c-a5f1-9097f714e3c7"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.249046 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d0d1a8-0eb3-4d1d-899a-e1441332e34d-kube-api-access-5j7lp" (OuterVolumeSpecName: "kube-api-access-5j7lp") pod "42d0d1a8-0eb3-4d1d-899a-e1441332e34d" (UID: "42d0d1a8-0eb3-4d1d-899a-e1441332e34d"). InnerVolumeSpecName "kube-api-access-5j7lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.251897 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-kube-api-access-9jss8" (OuterVolumeSpecName: "kube-api-access-9jss8") pod "d5735ff9-707c-4d2c-a5f1-9097f714e3c7" (UID: "d5735ff9-707c-4d2c-a5f1-9097f714e3c7"). InnerVolumeSpecName "kube-api-access-9jss8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.346526 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jss8\" (UniqueName: \"kubernetes.io/projected/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-kube-api-access-9jss8\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.346571 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.346581 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j7lp\" (UniqueName: \"kubernetes.io/projected/42d0d1a8-0eb3-4d1d-899a-e1441332e34d-kube-api-access-5j7lp\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.346589 4805 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.346600 4805 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.346608 4805 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.346617 4805 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d5735ff9-707c-4d2c-a5f1-9097f714e3c7-var-run\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.679570 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9xw22-config-kcsdj" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.679543 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9xw22-config-kcsdj" event={"ID":"d5735ff9-707c-4d2c-a5f1-9097f714e3c7","Type":"ContainerDied","Data":"7da78aafdbbc9b61b12275e1665b8c6ec3ff7d10cb6e2d67379e263180de7cbe"} Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.679686 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7da78aafdbbc9b61b12275e1665b8c6ec3ff7d10cb6e2d67379e263180de7cbe" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.681778 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1415-account-create-svvgs" event={"ID":"42d0d1a8-0eb3-4d1d-899a-e1441332e34d","Type":"ContainerDied","Data":"d42950b39a60adf55d0152953eb66f2f83d1013b19484e906e476ec4e70d7e2f"} Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.681808 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d42950b39a60adf55d0152953eb66f2f83d1013b19484e906e476ec4e70d7e2f" Dec 16 12:14:25 crc kubenswrapper[4805]: I1216 12:14:25.681881 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1415-account-create-svvgs" Dec 16 12:14:26 crc kubenswrapper[4805]: I1216 12:14:26.222281 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9xw22-config-kcsdj"] Dec 16 12:14:26 crc kubenswrapper[4805]: I1216 12:14:26.236409 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9xw22-config-kcsdj"] Dec 16 12:14:26 crc kubenswrapper[4805]: I1216 12:14:26.532015 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5735ff9-707c-4d2c-a5f1-9097f714e3c7" path="/var/lib/kubelet/pods/d5735ff9-707c-4d2c-a5f1-9097f714e3c7/volumes" Dec 16 12:14:26 crc kubenswrapper[4805]: I1216 12:14:26.859605 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1042-account-create-mxznj"] Dec 16 12:14:26 crc kubenswrapper[4805]: E1216 12:14:26.860774 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d0d1a8-0eb3-4d1d-899a-e1441332e34d" containerName="mariadb-account-create" Dec 16 12:14:26 crc kubenswrapper[4805]: I1216 12:14:26.860883 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d0d1a8-0eb3-4d1d-899a-e1441332e34d" containerName="mariadb-account-create" Dec 16 12:14:26 crc kubenswrapper[4805]: E1216 12:14:26.860985 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="327fe989-8d12-4624-adb2-d2fe4680168c" containerName="mariadb-account-create" Dec 16 12:14:26 crc kubenswrapper[4805]: I1216 12:14:26.861069 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="327fe989-8d12-4624-adb2-d2fe4680168c" containerName="mariadb-account-create" Dec 16 12:14:26 crc kubenswrapper[4805]: E1216 12:14:26.861179 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5735ff9-707c-4d2c-a5f1-9097f714e3c7" containerName="ovn-config" Dec 16 12:14:26 crc kubenswrapper[4805]: I1216 12:14:26.861270 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5735ff9-707c-4d2c-a5f1-9097f714e3c7" containerName="ovn-config" Dec 16 12:14:26 crc kubenswrapper[4805]: I1216 12:14:26.861588 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5735ff9-707c-4d2c-a5f1-9097f714e3c7" containerName="ovn-config" Dec 16 12:14:26 crc kubenswrapper[4805]: I1216 12:14:26.861704 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d0d1a8-0eb3-4d1d-899a-e1441332e34d" containerName="mariadb-account-create" Dec 16 12:14:26 crc kubenswrapper[4805]: I1216 12:14:26.861782 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="327fe989-8d12-4624-adb2-d2fe4680168c" containerName="mariadb-account-create" Dec 16 12:14:26 crc kubenswrapper[4805]: I1216 12:14:26.862538 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1042-account-create-mxznj" Dec 16 12:14:26 crc kubenswrapper[4805]: I1216 12:14:26.864796 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 16 12:14:26 crc kubenswrapper[4805]: I1216 12:14:26.870610 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1042-account-create-mxznj"] Dec 16 12:14:26 crc kubenswrapper[4805]: I1216 12:14:26.971323 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6jfx\" (UniqueName: \"kubernetes.io/projected/49dfee39-2eca-4686-ab17-fb6e4ccaf226-kube-api-access-w6jfx\") pod \"glance-1042-account-create-mxznj\" (UID: \"49dfee39-2eca-4686-ab17-fb6e4ccaf226\") " pod="openstack/glance-1042-account-create-mxznj" Dec 16 12:14:27 crc kubenswrapper[4805]: I1216 12:14:27.071548 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:14:27 crc kubenswrapper[4805]: I1216 12:14:27.071605 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:14:27 crc kubenswrapper[4805]: I1216 12:14:27.072623 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6jfx\" (UniqueName: \"kubernetes.io/projected/49dfee39-2eca-4686-ab17-fb6e4ccaf226-kube-api-access-w6jfx\") pod \"glance-1042-account-create-mxznj\" (UID: \"49dfee39-2eca-4686-ab17-fb6e4ccaf226\") " pod="openstack/glance-1042-account-create-mxznj" Dec 16 12:14:27 crc kubenswrapper[4805]: I1216 12:14:27.090314 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6jfx\" (UniqueName: \"kubernetes.io/projected/49dfee39-2eca-4686-ab17-fb6e4ccaf226-kube-api-access-w6jfx\") pod \"glance-1042-account-create-mxznj\" (UID: \"49dfee39-2eca-4686-ab17-fb6e4ccaf226\") " pod="openstack/glance-1042-account-create-mxznj" Dec 16 12:14:27 crc kubenswrapper[4805]: I1216 12:14:27.220845 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1042-account-create-mxznj" Dec 16 12:14:27 crc kubenswrapper[4805]: I1216 12:14:27.729430 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1042-account-create-mxznj"] Dec 16 12:14:27 crc kubenswrapper[4805]: W1216 12:14:27.734420 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49dfee39_2eca_4686_ab17_fb6e4ccaf226.slice/crio-95f8c818f4d8e0852ecf6db8f92b5ecbcc9ddb890a5780c296d951f3aaca744c WatchSource:0}: Error finding container 95f8c818f4d8e0852ecf6db8f92b5ecbcc9ddb890a5780c296d951f3aaca744c: Status 404 returned error can't find the container with id 95f8c818f4d8e0852ecf6db8f92b5ecbcc9ddb890a5780c296d951f3aaca744c Dec 16 12:14:28 crc kubenswrapper[4805]: I1216 12:14:28.395745 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:14:28 crc kubenswrapper[4805]: I1216 12:14:28.414678 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c7ed9b27-9804-4584-a244-30ba1f033e17-etc-swift\") pod \"swift-storage-0\" (UID: \"c7ed9b27-9804-4584-a244-30ba1f033e17\") " pod="openstack/swift-storage-0" Dec 16 12:14:28 crc kubenswrapper[4805]: I1216 12:14:28.654861 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 16 12:14:28 crc kubenswrapper[4805]: I1216 12:14:28.694027 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:14:28 crc kubenswrapper[4805]: I1216 12:14:28.717643 4805 generic.go:334] "Generic (PLEG): container finished" podID="49dfee39-2eca-4686-ab17-fb6e4ccaf226" containerID="15a1ebed4365cad53c6a401f1c678d67e23548745afed1a5a25d6807546e3cf4" exitCode=0 Dec 16 12:14:28 crc kubenswrapper[4805]: I1216 12:14:28.717708 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1042-account-create-mxznj" event={"ID":"49dfee39-2eca-4686-ab17-fb6e4ccaf226","Type":"ContainerDied","Data":"15a1ebed4365cad53c6a401f1c678d67e23548745afed1a5a25d6807546e3cf4"} Dec 16 12:14:28 crc kubenswrapper[4805]: I1216 12:14:28.717742 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1042-account-create-mxznj" event={"ID":"49dfee39-2eca-4686-ab17-fb6e4ccaf226","Type":"ContainerStarted","Data":"95f8c818f4d8e0852ecf6db8f92b5ecbcc9ddb890a5780c296d951f3aaca744c"} Dec 16 12:14:29 crc kubenswrapper[4805]: I1216 12:14:29.354085 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 16 12:14:29 crc kubenswrapper[4805]: W1216 12:14:29.371417 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7ed9b27_9804_4584_a244_30ba1f033e17.slice/crio-dc2b8fcad8645590259b9288e33c30b44510654f9361dacb2980fafe3d67033f WatchSource:0}: Error finding container dc2b8fcad8645590259b9288e33c30b44510654f9361dacb2980fafe3d67033f: Status 404 returned error can't find the container with id dc2b8fcad8645590259b9288e33c30b44510654f9361dacb2980fafe3d67033f Dec 16 12:14:29 crc kubenswrapper[4805]: I1216 12:14:29.726499 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"dc2b8fcad8645590259b9288e33c30b44510654f9361dacb2980fafe3d67033f"} Dec 16 12:14:30 crc kubenswrapper[4805]: I1216 12:14:30.069848 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1042-account-create-mxznj" Dec 16 12:14:30 crc kubenswrapper[4805]: I1216 12:14:30.254549 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6jfx\" (UniqueName: \"kubernetes.io/projected/49dfee39-2eca-4686-ab17-fb6e4ccaf226-kube-api-access-w6jfx\") pod \"49dfee39-2eca-4686-ab17-fb6e4ccaf226\" (UID: \"49dfee39-2eca-4686-ab17-fb6e4ccaf226\") " Dec 16 12:14:30 crc kubenswrapper[4805]: I1216 12:14:30.268429 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49dfee39-2eca-4686-ab17-fb6e4ccaf226-kube-api-access-w6jfx" (OuterVolumeSpecName: "kube-api-access-w6jfx") pod "49dfee39-2eca-4686-ab17-fb6e4ccaf226" (UID: "49dfee39-2eca-4686-ab17-fb6e4ccaf226"). InnerVolumeSpecName "kube-api-access-w6jfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:14:30 crc kubenswrapper[4805]: I1216 12:14:30.358370 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6jfx\" (UniqueName: \"kubernetes.io/projected/49dfee39-2eca-4686-ab17-fb6e4ccaf226-kube-api-access-w6jfx\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:30 crc kubenswrapper[4805]: I1216 12:14:30.736114 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1042-account-create-mxznj" event={"ID":"49dfee39-2eca-4686-ab17-fb6e4ccaf226","Type":"ContainerDied","Data":"95f8c818f4d8e0852ecf6db8f92b5ecbcc9ddb890a5780c296d951f3aaca744c"} Dec 16 12:14:30 crc kubenswrapper[4805]: I1216 12:14:30.737348 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95f8c818f4d8e0852ecf6db8f92b5ecbcc9ddb890a5780c296d951f3aaca744c" Dec 16 12:14:30 crc kubenswrapper[4805]: I1216 12:14:30.736346 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1042-account-create-mxznj" Dec 16 12:14:30 crc kubenswrapper[4805]: I1216 12:14:30.984536 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-9xw22" Dec 16 12:14:31 crc kubenswrapper[4805]: I1216 12:14:31.752570 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"bf952a00dfccb741a280848cbda1ea5d4fac85e4ab4dce84b27c9eaa0f4d6904"} Dec 16 12:14:31 crc kubenswrapper[4805]: I1216 12:14:31.752620 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"36010d9a50da4291f00bf5416f26f53a1fd518f7793560f3f0192a6517447d1c"} Dec 16 12:14:31 crc kubenswrapper[4805]: I1216 12:14:31.752630 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"fd6c91ec31e1be9d307f1e046a489da8d56c79b4f2834c8ece21d1b1181147c9"} Dec 16 12:14:31 crc kubenswrapper[4805]: I1216 12:14:31.937983 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-x5dc8"] Dec 16 12:14:31 crc kubenswrapper[4805]: E1216 12:14:31.938381 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49dfee39-2eca-4686-ab17-fb6e4ccaf226" containerName="mariadb-account-create" Dec 16 12:14:31 crc kubenswrapper[4805]: I1216 12:14:31.938400 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="49dfee39-2eca-4686-ab17-fb6e4ccaf226" containerName="mariadb-account-create" Dec 16 12:14:31 crc kubenswrapper[4805]: I1216 12:14:31.938560 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="49dfee39-2eca-4686-ab17-fb6e4ccaf226" containerName="mariadb-account-create" Dec 16 12:14:31 crc kubenswrapper[4805]: I1216 12:14:31.939193 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-x5dc8" Dec 16 12:14:31 crc kubenswrapper[4805]: I1216 12:14:31.948960 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 16 12:14:31 crc kubenswrapper[4805]: I1216 12:14:31.948959 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-r9rpt" Dec 16 12:14:31 crc kubenswrapper[4805]: I1216 12:14:31.984024 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-x5dc8"] Dec 16 12:14:32 crc kubenswrapper[4805]: I1216 12:14:32.012653 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-db-sync-config-data\") pod \"glance-db-sync-x5dc8\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " pod="openstack/glance-db-sync-x5dc8" Dec 16 12:14:32 crc kubenswrapper[4805]: I1216 12:14:32.012743 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpcxj\" (UniqueName: \"kubernetes.io/projected/c5fec641-bdba-4f8a-b6bf-d13721a860d2-kube-api-access-dpcxj\") pod \"glance-db-sync-x5dc8\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " pod="openstack/glance-db-sync-x5dc8" Dec 16 12:14:32 crc kubenswrapper[4805]: I1216 12:14:32.012780 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-combined-ca-bundle\") pod \"glance-db-sync-x5dc8\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " pod="openstack/glance-db-sync-x5dc8" Dec 16 12:14:32 crc kubenswrapper[4805]: I1216 12:14:32.012843 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-config-data\") pod \"glance-db-sync-x5dc8\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " pod="openstack/glance-db-sync-x5dc8" Dec 16 12:14:32 crc kubenswrapper[4805]: I1216 12:14:32.114375 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-db-sync-config-data\") pod \"glance-db-sync-x5dc8\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " pod="openstack/glance-db-sync-x5dc8" Dec 16 12:14:32 crc kubenswrapper[4805]: I1216 12:14:32.114450 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpcxj\" (UniqueName: \"kubernetes.io/projected/c5fec641-bdba-4f8a-b6bf-d13721a860d2-kube-api-access-dpcxj\") pod \"glance-db-sync-x5dc8\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " pod="openstack/glance-db-sync-x5dc8" Dec 16 12:14:32 crc kubenswrapper[4805]: I1216 12:14:32.114476 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-combined-ca-bundle\") pod \"glance-db-sync-x5dc8\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " pod="openstack/glance-db-sync-x5dc8" Dec 16 12:14:32 crc kubenswrapper[4805]: I1216 12:14:32.114519 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-config-data\") pod 
\"glance-db-sync-x5dc8\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " pod="openstack/glance-db-sync-x5dc8" Dec 16 12:14:32 crc kubenswrapper[4805]: I1216 12:14:32.119773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-config-data\") pod \"glance-db-sync-x5dc8\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " pod="openstack/glance-db-sync-x5dc8" Dec 16 12:14:32 crc kubenswrapper[4805]: I1216 12:14:32.119925 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-combined-ca-bundle\") pod \"glance-db-sync-x5dc8\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " pod="openstack/glance-db-sync-x5dc8" Dec 16 12:14:32 crc kubenswrapper[4805]: I1216 12:14:32.126578 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-db-sync-config-data\") pod \"glance-db-sync-x5dc8\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " pod="openstack/glance-db-sync-x5dc8" Dec 16 12:14:32 crc kubenswrapper[4805]: I1216 12:14:32.135826 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpcxj\" (UniqueName: \"kubernetes.io/projected/c5fec641-bdba-4f8a-b6bf-d13721a860d2-kube-api-access-dpcxj\") pod \"glance-db-sync-x5dc8\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " pod="openstack/glance-db-sync-x5dc8" Dec 16 12:14:32 crc kubenswrapper[4805]: I1216 12:14:32.263120 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-x5dc8" Dec 16 12:14:32 crc kubenswrapper[4805]: I1216 12:14:32.768230 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"8aa565b000b12fa380171ce0da783a1b6d1465c446ec1199ed9396366a2e9473"} Dec 16 12:14:32 crc kubenswrapper[4805]: I1216 12:14:32.840355 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-x5dc8"] Dec 16 12:14:33 crc kubenswrapper[4805]: I1216 12:14:33.822781 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"bfd756927524c8bd635959088fe9f12c31e4b8d48d994ecebb5115065baf7440"} Dec 16 12:14:33 crc kubenswrapper[4805]: I1216 12:14:33.841710 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x5dc8" event={"ID":"c5fec641-bdba-4f8a-b6bf-d13721a860d2","Type":"ContainerStarted","Data":"8d7c06f0bcbf350f5b858566f0e9b03a92ea2dc1274e7738864206943385faf9"} Dec 16 12:14:34 crc kubenswrapper[4805]: I1216 12:14:34.856238 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"0f41bba7a0318ecd86e0a6b5f768b864ff76a332681630beb35b719ba93874f8"} Dec 16 12:14:35 crc kubenswrapper[4805]: I1216 12:14:35.867075 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"6387389248ba6a862656678b562c23a29fa1336c5a46e2ac5175ded97c255cff"} Dec 16 12:14:38 crc kubenswrapper[4805]: I1216 12:14:38.305711 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 16 12:14:38 crc kubenswrapper[4805]: I1216 12:14:38.865010 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-k5str"] Dec 16 12:14:38 crc kubenswrapper[4805]: I1216 12:14:38.867514 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k5str" Dec 16 12:14:38 crc kubenswrapper[4805]: I1216 12:14:38.897124 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-k5str"] Dec 16 12:14:38 crc kubenswrapper[4805]: I1216 12:14:38.955541 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"f04241e883104c894c54c61c414532b7e3f60c86f2cf49b8c0a6e71f562ea04b"} Dec 16 12:14:38 crc kubenswrapper[4805]: I1216 12:14:38.985717 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-tdldw"] Dec 16 12:14:38 crc kubenswrapper[4805]: I1216 12:14:38.990129 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tdldw" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.015086 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tdldw"] Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.051982 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4zzl\" (UniqueName: \"kubernetes.io/projected/2013484d-d204-43f8-a923-7ec541851696-kube-api-access-s4zzl\") pod \"barbican-db-create-tdldw\" (UID: \"2013484d-d204-43f8-a923-7ec541851696\") " pod="openstack/barbican-db-create-tdldw" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.052331 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6sjg\" (UniqueName: \"kubernetes.io/projected/a7de0415-ad51-47a8-965e-fe16b2d9af9c-kube-api-access-c6sjg\") pod \"cinder-db-create-k5str\" (UID: \"a7de0415-ad51-47a8-965e-fe16b2d9af9c\") " pod="openstack/cinder-db-create-k5str" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.103373 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-shl6n"] Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.104832 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-shl6n" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.134664 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-shl6n"] Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.155137 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6sjg\" (UniqueName: \"kubernetes.io/projected/a7de0415-ad51-47a8-965e-fe16b2d9af9c-kube-api-access-c6sjg\") pod \"cinder-db-create-k5str\" (UID: \"a7de0415-ad51-47a8-965e-fe16b2d9af9c\") " pod="openstack/cinder-db-create-k5str" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.155384 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4zzl\" (UniqueName: \"kubernetes.io/projected/2013484d-d204-43f8-a923-7ec541851696-kube-api-access-s4zzl\") pod \"barbican-db-create-tdldw\" (UID: \"2013484d-d204-43f8-a923-7ec541851696\") " pod="openstack/barbican-db-create-tdldw" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.201266 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4zzl\" (UniqueName: \"kubernetes.io/projected/2013484d-d204-43f8-a923-7ec541851696-kube-api-access-s4zzl\") pod \"barbican-db-create-tdldw\" (UID: \"2013484d-d204-43f8-a923-7ec541851696\") " pod="openstack/barbican-db-create-tdldw" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.212708 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6sjg\" (UniqueName: \"kubernetes.io/projected/a7de0415-ad51-47a8-965e-fe16b2d9af9c-kube-api-access-c6sjg\") pod \"cinder-db-create-k5str\" (UID: \"a7de0415-ad51-47a8-965e-fe16b2d9af9c\") " pod="openstack/cinder-db-create-k5str" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.242267 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-qckpn"] Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.243621 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qckpn" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.247258 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.248424 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.248608 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.248799 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-d6pz6" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.256886 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68m9v\" (UniqueName: \"kubernetes.io/projected/3fd974ee-1177-4f7e-8c86-63aec6f9c86e-kube-api-access-68m9v\") pod \"neutron-db-create-shl6n\" (UID: \"3fd974ee-1177-4f7e-8c86-63aec6f9c86e\") " pod="openstack/neutron-db-create-shl6n" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.277978 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qckpn"] Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.336896 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-tdldw" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.361545 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60b01ba-b205-4451-8a28-33dfb9845ff2-config-data\") pod \"keystone-db-sync-qckpn\" (UID: \"b60b01ba-b205-4451-8a28-33dfb9845ff2\") " pod="openstack/keystone-db-sync-qckpn" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.362072 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68m9v\" (UniqueName: \"kubernetes.io/projected/3fd974ee-1177-4f7e-8c86-63aec6f9c86e-kube-api-access-68m9v\") pod \"neutron-db-create-shl6n\" (UID: \"3fd974ee-1177-4f7e-8c86-63aec6f9c86e\") " pod="openstack/neutron-db-create-shl6n" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.362233 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60b01ba-b205-4451-8a28-33dfb9845ff2-combined-ca-bundle\") pod \"keystone-db-sync-qckpn\" (UID: \"b60b01ba-b205-4451-8a28-33dfb9845ff2\") " pod="openstack/keystone-db-sync-qckpn" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.362426 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpmrq\" (UniqueName: \"kubernetes.io/projected/b60b01ba-b205-4451-8a28-33dfb9845ff2-kube-api-access-cpmrq\") pod \"keystone-db-sync-qckpn\" (UID: \"b60b01ba-b205-4451-8a28-33dfb9845ff2\") " pod="openstack/keystone-db-sync-qckpn" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.391874 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68m9v\" (UniqueName: \"kubernetes.io/projected/3fd974ee-1177-4f7e-8c86-63aec6f9c86e-kube-api-access-68m9v\") pod \"neutron-db-create-shl6n\" (UID: \"3fd974ee-1177-4f7e-8c86-63aec6f9c86e\") " pod="openstack/neutron-db-create-shl6n" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.430012 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-shl6n" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.463368 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpmrq\" (UniqueName: \"kubernetes.io/projected/b60b01ba-b205-4451-8a28-33dfb9845ff2-kube-api-access-cpmrq\") pod \"keystone-db-sync-qckpn\" (UID: \"b60b01ba-b205-4451-8a28-33dfb9845ff2\") " pod="openstack/keystone-db-sync-qckpn" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.463481 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60b01ba-b205-4451-8a28-33dfb9845ff2-config-data\") pod \"keystone-db-sync-qckpn\" (UID: \"b60b01ba-b205-4451-8a28-33dfb9845ff2\") " pod="openstack/keystone-db-sync-qckpn" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.463579 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60b01ba-b205-4451-8a28-33dfb9845ff2-combined-ca-bundle\") pod \"keystone-db-sync-qckpn\" (UID: \"b60b01ba-b205-4451-8a28-33dfb9845ff2\") " pod="openstack/keystone-db-sync-qckpn" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.466942 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60b01ba-b205-4451-8a28-33dfb9845ff2-combined-ca-bundle\") pod \"keystone-db-sync-qckpn\" (UID: \"b60b01ba-b205-4451-8a28-33dfb9845ff2\") " pod="openstack/keystone-db-sync-qckpn" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.473771 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60b01ba-b205-4451-8a28-33dfb9845ff2-config-data\") pod \"keystone-db-sync-qckpn\" (UID: \"b60b01ba-b205-4451-8a28-33dfb9845ff2\") " pod="openstack/keystone-db-sync-qckpn" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.489666 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpmrq\" (UniqueName: \"kubernetes.io/projected/b60b01ba-b205-4451-8a28-33dfb9845ff2-kube-api-access-cpmrq\") pod \"keystone-db-sync-qckpn\" (UID: \"b60b01ba-b205-4451-8a28-33dfb9845ff2\") " pod="openstack/keystone-db-sync-qckpn" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.491192 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k5str" Dec 16 12:14:39 crc kubenswrapper[4805]: I1216 12:14:39.615056 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qckpn" Dec 16 12:14:47 crc kubenswrapper[4805]: I1216 12:14:47.235770 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tdldw"] Dec 16 12:14:47 crc kubenswrapper[4805]: W1216 12:14:47.246006 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2013484d_d204_43f8_a923_7ec541851696.slice/crio-caa5b0e9564a6b3875c060f1577846ae95e2090ae6dc9085ec7fa419fc6182fa WatchSource:0}: Error finding container caa5b0e9564a6b3875c060f1577846ae95e2090ae6dc9085ec7fa419fc6182fa: Status 404 returned error can't find the container with id caa5b0e9564a6b3875c060f1577846ae95e2090ae6dc9085ec7fa419fc6182fa Dec 16 12:14:47 crc kubenswrapper[4805]: I1216 12:14:47.249426 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qckpn"] Dec 16 12:14:47 crc kubenswrapper[4805]: W1216 12:14:47.253785 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb60b01ba_b205_4451_8a28_33dfb9845ff2.slice/crio-6abf53e001595359dd7eb757ee3362df4a09e366026e72b1d0ae12b12944e3ea WatchSource:0}: Error finding container 6abf53e001595359dd7eb757ee3362df4a09e366026e72b1d0ae12b12944e3ea: Status 404 returned error can't find the container with id 6abf53e001595359dd7eb757ee3362df4a09e366026e72b1d0ae12b12944e3ea Dec 16 12:14:47 crc kubenswrapper[4805]: I1216 12:14:47.325827 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-shl6n"] Dec 16 12:14:47 crc kubenswrapper[4805]: W1216 12:14:47.333180 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7de0415_ad51_47a8_965e_fe16b2d9af9c.slice/crio-d2e5291bfc1fa1750ecb259f77dcc39c3ae326acddaf54a1400fa7697d8d27ab WatchSource:0}: Error finding container d2e5291bfc1fa1750ecb259f77dcc39c3ae326acddaf54a1400fa7697d8d27ab: Status 404 returned error can't find the container with id d2e5291bfc1fa1750ecb259f77dcc39c3ae326acddaf54a1400fa7697d8d27ab Dec 16 12:14:47 crc kubenswrapper[4805]: I1216 12:14:47.338625 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-k5str"] Dec 16 12:14:47 crc kubenswrapper[4805]: W1216 12:14:47.357063 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd974ee_1177_4f7e_8c86_63aec6f9c86e.slice/crio-409b3c6fee54fff5b9f347ae8765070ccb4a449728b691a8b6f7394aeb58d69a WatchSource:0}: Error finding container 409b3c6fee54fff5b9f347ae8765070ccb4a449728b691a8b6f7394aeb58d69a: Status 404 returned error can't find the container with id 409b3c6fee54fff5b9f347ae8765070ccb4a449728b691a8b6f7394aeb58d69a Dec 16 12:14:48 crc kubenswrapper[4805]: I1216 12:14:48.032267 4805 generic.go:334] "Generic (PLEG): container finished" podID="3fd974ee-1177-4f7e-8c86-63aec6f9c86e" containerID="68782eb1bd58b57a8b3a20e2f973f63e4b28e322b87bb3e215451f59b631e0c2" exitCode=0 Dec 16 12:14:48 crc kubenswrapper[4805]: I1216 12:14:48.032568 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-shl6n" event={"ID":"3fd974ee-1177-4f7e-8c86-63aec6f9c86e","Type":"ContainerDied","Data":"68782eb1bd58b57a8b3a20e2f973f63e4b28e322b87bb3e215451f59b631e0c2"} Dec 16 12:14:48 crc kubenswrapper[4805]: I1216 12:14:48.032593 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-shl6n" event={"ID":"3fd974ee-1177-4f7e-8c86-63aec6f9c86e","Type":"ContainerStarted","Data":"409b3c6fee54fff5b9f347ae8765070ccb4a449728b691a8b6f7394aeb58d69a"} Dec 16 12:14:48 crc kubenswrapper[4805]: I1216 12:14:48.033978 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qckpn" event={"ID":"b60b01ba-b205-4451-8a28-33dfb9845ff2","Type":"ContainerStarted","Data":"6abf53e001595359dd7eb757ee3362df4a09e366026e72b1d0ae12b12944e3ea"} Dec 16 12:14:48 crc kubenswrapper[4805]: I1216 12:14:48.035664 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x5dc8" event={"ID":"c5fec641-bdba-4f8a-b6bf-d13721a860d2","Type":"ContainerStarted","Data":"8236a7a863a73ca414f133a79af5c6335a5ab2bfefb38c3305d212f6c1009298"} Dec 16 12:14:48 crc kubenswrapper[4805]: I1216 12:14:48.039709 4805 generic.go:334] "Generic (PLEG): container finished" podID="2013484d-d204-43f8-a923-7ec541851696" containerID="9dff9f6b161586603998e6cdb1c926d338bcdf68fd6912ae7406c084b60a6f42" exitCode=0 Dec 16 12:14:48 crc kubenswrapper[4805]: I1216 12:14:48.039819 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tdldw" event={"ID":"2013484d-d204-43f8-a923-7ec541851696","Type":"ContainerDied","Data":"9dff9f6b161586603998e6cdb1c926d338bcdf68fd6912ae7406c084b60a6f42"} Dec 16 12:14:48 crc kubenswrapper[4805]: I1216 12:14:48.039839 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tdldw" event={"ID":"2013484d-d204-43f8-a923-7ec541851696","Type":"ContainerStarted","Data":"caa5b0e9564a6b3875c060f1577846ae95e2090ae6dc9085ec7fa419fc6182fa"} Dec 16 12:14:48 crc kubenswrapper[4805]: I1216 12:14:48.041192 4805 generic.go:334] "Generic (PLEG): container finished" podID="a7de0415-ad51-47a8-965e-fe16b2d9af9c" containerID="736c3119475f389e96206fca8225715dc5a2bc2dd0fa72d966188bd243fc3bf9" exitCode=0 Dec 16 12:14:48 crc kubenswrapper[4805]: I1216 12:14:48.041255 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k5str" event={"ID":"a7de0415-ad51-47a8-965e-fe16b2d9af9c","Type":"ContainerDied","Data":"736c3119475f389e96206fca8225715dc5a2bc2dd0fa72d966188bd243fc3bf9"} Dec 16 12:14:48 crc kubenswrapper[4805]: I1216 12:14:48.041278 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k5str" event={"ID":"a7de0415-ad51-47a8-965e-fe16b2d9af9c","Type":"ContainerStarted","Data":"d2e5291bfc1fa1750ecb259f77dcc39c3ae326acddaf54a1400fa7697d8d27ab"} Dec 16 12:14:48 crc kubenswrapper[4805]: I1216 12:14:48.114371 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-x5dc8" podStartSLOduration=2.996129425 podStartE2EDuration="17.114348887s" podCreationTimestamp="2025-12-16 12:14:31 +0000 UTC" firstStartedPulling="2025-12-16 12:14:32.855297121 +0000 UTC m=+1146.573554936" lastFinishedPulling="2025-12-16 12:14:46.973516593 +0000 UTC m=+1160.691774398" observedRunningTime="2025-12-16 12:14:48.113630696 +0000 UTC m=+1161.831888501" watchObservedRunningTime="2025-12-16 12:14:48.114348887 +0000 UTC m=+1161.832606712" Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.066263 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"c26c23b387a6a6bf23eaa889ee6ef9369b54ecb3ae9f078c0bacd263c7c516cf"} Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.066678 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"764f081525c803fdab12598f1abb1a119ed703d57e3302c018978cc8bd993256"} Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.066695 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"901e248fa985015a259398d644827909942037f95056a1e9a232ad75358bf72a"} Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.066707 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"cc6d348f6f74d1b0da7ee31e29f85101a54afd2d8ec7b752b2d2a44d06fb292a"} Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.607980 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-shl6n" Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.637225 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tdldw" Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.677252 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k5str" Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.772686 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4zzl\" (UniqueName: \"kubernetes.io/projected/2013484d-d204-43f8-a923-7ec541851696-kube-api-access-s4zzl\") pod \"2013484d-d204-43f8-a923-7ec541851696\" (UID: \"2013484d-d204-43f8-a923-7ec541851696\") " Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.772866 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6sjg\" (UniqueName: \"kubernetes.io/projected/a7de0415-ad51-47a8-965e-fe16b2d9af9c-kube-api-access-c6sjg\") pod \"a7de0415-ad51-47a8-965e-fe16b2d9af9c\" (UID: \"a7de0415-ad51-47a8-965e-fe16b2d9af9c\") " Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.772940 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68m9v\" (UniqueName: \"kubernetes.io/projected/3fd974ee-1177-4f7e-8c86-63aec6f9c86e-kube-api-access-68m9v\") pod \"3fd974ee-1177-4f7e-8c86-63aec6f9c86e\" (UID: \"3fd974ee-1177-4f7e-8c86-63aec6f9c86e\") " Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.779868 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2013484d-d204-43f8-a923-7ec541851696-kube-api-access-s4zzl" (OuterVolumeSpecName: "kube-api-access-s4zzl") pod "2013484d-d204-43f8-a923-7ec541851696" (UID: "2013484d-d204-43f8-a923-7ec541851696"). InnerVolumeSpecName "kube-api-access-s4zzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.781202 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd974ee-1177-4f7e-8c86-63aec6f9c86e-kube-api-access-68m9v" (OuterVolumeSpecName: "kube-api-access-68m9v") pod "3fd974ee-1177-4f7e-8c86-63aec6f9c86e" (UID: "3fd974ee-1177-4f7e-8c86-63aec6f9c86e"). InnerVolumeSpecName "kube-api-access-68m9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.781964 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7de0415-ad51-47a8-965e-fe16b2d9af9c-kube-api-access-c6sjg" (OuterVolumeSpecName: "kube-api-access-c6sjg") pod "a7de0415-ad51-47a8-965e-fe16b2d9af9c" (UID: "a7de0415-ad51-47a8-965e-fe16b2d9af9c"). InnerVolumeSpecName "kube-api-access-c6sjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.874581 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6sjg\" (UniqueName: \"kubernetes.io/projected/a7de0415-ad51-47a8-965e-fe16b2d9af9c-kube-api-access-c6sjg\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.874615 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68m9v\" (UniqueName: \"kubernetes.io/projected/3fd974ee-1177-4f7e-8c86-63aec6f9c86e-kube-api-access-68m9v\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:49 crc kubenswrapper[4805]: I1216 12:14:49.874624 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4zzl\" (UniqueName: \"kubernetes.io/projected/2013484d-d204-43f8-a923-7ec541851696-kube-api-access-s4zzl\") on node \"crc\" DevicePath \"\"" Dec 16 12:14:50 crc kubenswrapper[4805]: I1216 12:14:50.077549 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k5str" event={"ID":"a7de0415-ad51-47a8-965e-fe16b2d9af9c","Type":"ContainerDied","Data":"d2e5291bfc1fa1750ecb259f77dcc39c3ae326acddaf54a1400fa7697d8d27ab"} Dec 16 12:14:50 crc kubenswrapper[4805]: I1216 12:14:50.077876 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2e5291bfc1fa1750ecb259f77dcc39c3ae326acddaf54a1400fa7697d8d27ab" Dec 16 12:14:50 crc kubenswrapper[4805]: I1216 12:14:50.078234 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k5str" Dec 16 12:14:50 crc kubenswrapper[4805]: I1216 12:14:50.079557 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-shl6n" event={"ID":"3fd974ee-1177-4f7e-8c86-63aec6f9c86e","Type":"ContainerDied","Data":"409b3c6fee54fff5b9f347ae8765070ccb4a449728b691a8b6f7394aeb58d69a"} Dec 16 12:14:50 crc kubenswrapper[4805]: I1216 12:14:50.079580 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="409b3c6fee54fff5b9f347ae8765070ccb4a449728b691a8b6f7394aeb58d69a" Dec 16 12:14:50 crc kubenswrapper[4805]: I1216 12:14:50.079634 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-shl6n" Dec 16 12:14:50 crc kubenswrapper[4805]: I1216 12:14:50.109357 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"f1e68220c93e0cb78c8fda80ed9f9ada1997f8943a4b27c3a934ac1a66a78f72"} Dec 16 12:14:50 crc kubenswrapper[4805]: I1216 12:14:50.109402 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"29d7bae7569043c6989e8080402d0cef03e85caea4fc88aaf7de2fe290688116"} Dec 16 12:14:50 crc kubenswrapper[4805]: I1216 12:14:50.111552 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tdldw" event={"ID":"2013484d-d204-43f8-a923-7ec541851696","Type":"ContainerDied","Data":"caa5b0e9564a6b3875c060f1577846ae95e2090ae6dc9085ec7fa419fc6182fa"} Dec 16 12:14:50 crc kubenswrapper[4805]: I1216 12:14:50.111580 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caa5b0e9564a6b3875c060f1577846ae95e2090ae6dc9085ec7fa419fc6182fa" Dec 16 12:14:50 crc kubenswrapper[4805]: I1216 12:14:50.111642 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tdldw" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.133719 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c7ed9b27-9804-4584-a244-30ba1f033e17","Type":"ContainerStarted","Data":"69a5b46362af7cf2ab0884561fe7ac5fa5bf018f02f4fbcba2e589b5785c2d3d"} Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.171378 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.462981892 podStartE2EDuration="57.17110754s" podCreationTimestamp="2025-12-16 12:13:54 +0000 UTC" firstStartedPulling="2025-12-16 12:14:29.374302071 +0000 UTC m=+1143.092559876" lastFinishedPulling="2025-12-16 12:14:48.082427729 +0000 UTC m=+1161.800685524" observedRunningTime="2025-12-16 12:14:51.164309695 +0000 UTC m=+1164.882567520" watchObservedRunningTime="2025-12-16 12:14:51.17110754 +0000 UTC m=+1164.889365355" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.540698 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-zbc97"] Dec 16 12:14:51 crc kubenswrapper[4805]: E1216 12:14:51.541098 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7de0415-ad51-47a8-965e-fe16b2d9af9c" containerName="mariadb-database-create" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.541123 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7de0415-ad51-47a8-965e-fe16b2d9af9c" containerName="mariadb-database-create" Dec 16 12:14:51 crc kubenswrapper[4805]: E1216 12:14:51.541178 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd974ee-1177-4f7e-8c86-63aec6f9c86e" containerName="mariadb-database-create" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.541188 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd974ee-1177-4f7e-8c86-63aec6f9c86e" containerName="mariadb-database-create" Dec 16 12:14:51 crc kubenswrapper[4805]: E1216 12:14:51.541210 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2013484d-d204-43f8-a923-7ec541851696" containerName="mariadb-database-create" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 
12:14:51.541220 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2013484d-d204-43f8-a923-7ec541851696" containerName="mariadb-database-create" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.541429 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7de0415-ad51-47a8-965e-fe16b2d9af9c" containerName="mariadb-database-create" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.541469 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd974ee-1177-4f7e-8c86-63aec6f9c86e" containerName="mariadb-database-create" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.541492 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2013484d-d204-43f8-a923-7ec541851696" containerName="mariadb-database-create" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.542627 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.545188 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.565853 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-zbc97"] Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.727598 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpp46\" (UniqueName: \"kubernetes.io/projected/a28faa94-62ae-49fa-8f01-8dff4989044e-kube-api-access-kpp46\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.727663 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.727684 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.727702 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.728057 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.728084 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-config\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.833109 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.833272 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.833303 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.833497 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.833525 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-config\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.833583 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpp46\" (UniqueName: \"kubernetes.io/projected/a28faa94-62ae-49fa-8f01-8dff4989044e-kube-api-access-kpp46\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.834219 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.834606 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.834656 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-ovsdbserver-sb\") 
pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.834763 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-config\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.835826 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.854496 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpp46\" (UniqueName: \"kubernetes.io/projected/a28faa94-62ae-49fa-8f01-8dff4989044e-kube-api-access-kpp46\") pod \"dnsmasq-dns-77585f5f8c-zbc97\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:51 crc kubenswrapper[4805]: I1216 12:14:51.867740 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:14:56 crc kubenswrapper[4805]: I1216 12:14:56.564460 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-zbc97"] Dec 16 12:14:56 crc kubenswrapper[4805]: W1216 12:14:56.571502 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda28faa94_62ae_49fa_8f01_8dff4989044e.slice/crio-2f16371396ee923d681cc6df9802c47effb61ec9f24d9c724b679eac2e967e06 WatchSource:0}: Error finding container 2f16371396ee923d681cc6df9802c47effb61ec9f24d9c724b679eac2e967e06: Status 404 returned error can't find the container with id 2f16371396ee923d681cc6df9802c47effb61ec9f24d9c724b679eac2e967e06 Dec 16 12:14:57 crc kubenswrapper[4805]: I1216 12:14:57.072265 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:14:57 crc kubenswrapper[4805]: I1216 12:14:57.072686 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:14:57 crc kubenswrapper[4805]: I1216 12:14:57.212156 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" event={"ID":"a28faa94-62ae-49fa-8f01-8dff4989044e","Type":"ContainerStarted","Data":"2f16371396ee923d681cc6df9802c47effb61ec9f24d9c724b679eac2e967e06"} Dec 16 12:14:58 crc kubenswrapper[4805]: I1216 12:14:58.839134 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-514d-account-create-2rk67"] Dec 16 12:14:58 crc kubenswrapper[4805]: I1216 12:14:58.840789 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-514d-account-create-2rk67" Dec 16 12:14:58 crc kubenswrapper[4805]: I1216 12:14:58.845557 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 16 12:14:58 crc kubenswrapper[4805]: I1216 12:14:58.857654 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-514d-account-create-2rk67"] Dec 16 12:14:58 crc kubenswrapper[4805]: I1216 12:14:58.973702 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zqkx\" (UniqueName: \"kubernetes.io/projected/66f57d62-e317-4e4b-ac87-b56d6ece0564-kube-api-access-7zqkx\") pod \"cinder-514d-account-create-2rk67\" (UID: \"66f57d62-e317-4e4b-ac87-b56d6ece0564\") " pod="openstack/cinder-514d-account-create-2rk67" Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.046384 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3d76-account-create-npbtw"] Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.047397 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3d76-account-create-npbtw" Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.049872 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.058120 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3d76-account-create-npbtw"] Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.080730 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zqkx\" (UniqueName: \"kubernetes.io/projected/66f57d62-e317-4e4b-ac87-b56d6ece0564-kube-api-access-7zqkx\") pod \"cinder-514d-account-create-2rk67\" (UID: \"66f57d62-e317-4e4b-ac87-b56d6ece0564\") " pod="openstack/cinder-514d-account-create-2rk67" Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.109094 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zqkx\" (UniqueName: \"kubernetes.io/projected/66f57d62-e317-4e4b-ac87-b56d6ece0564-kube-api-access-7zqkx\") pod \"cinder-514d-account-create-2rk67\" (UID: \"66f57d62-e317-4e4b-ac87-b56d6ece0564\") " pod="openstack/cinder-514d-account-create-2rk67" Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.161981 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-514d-account-create-2rk67" Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.181984 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnplb\" (UniqueName: \"kubernetes.io/projected/47d14a2a-df4a-45be-b675-9a69ed1e8d45-kube-api-access-fnplb\") pod \"barbican-3d76-account-create-npbtw\" (UID: \"47d14a2a-df4a-45be-b675-9a69ed1e8d45\") " pod="openstack/barbican-3d76-account-create-npbtw" Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.232281 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" event={"ID":"a28faa94-62ae-49fa-8f01-8dff4989044e","Type":"ContainerStarted","Data":"e7fdf8ca296f45dffdc22499e1d25e253d30eb502ca17bedc3e17d3b7f8a1dfa"} Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.245489 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c9e7-account-create-5fpb9"] Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.246909 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c9e7-account-create-5fpb9" Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.249813 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.254261 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c9e7-account-create-5fpb9"] Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.284618 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnplb\" (UniqueName: \"kubernetes.io/projected/47d14a2a-df4a-45be-b675-9a69ed1e8d45-kube-api-access-fnplb\") pod \"barbican-3d76-account-create-npbtw\" (UID: \"47d14a2a-df4a-45be-b675-9a69ed1e8d45\") " pod="openstack/barbican-3d76-account-create-npbtw" Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.310591 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnplb\" (UniqueName: \"kubernetes.io/projected/47d14a2a-df4a-45be-b675-9a69ed1e8d45-kube-api-access-fnplb\") pod \"barbican-3d76-account-create-npbtw\" (UID: \"47d14a2a-df4a-45be-b675-9a69ed1e8d45\") " pod="openstack/barbican-3d76-account-create-npbtw" Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.368605 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3d76-account-create-npbtw" Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.386436 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxqqk\" (UniqueName: \"kubernetes.io/projected/a3a83717-7d8b-4175-82db-106b634368b0-kube-api-access-mxqqk\") pod \"neutron-c9e7-account-create-5fpb9\" (UID: \"a3a83717-7d8b-4175-82db-106b634368b0\") " pod="openstack/neutron-c9e7-account-create-5fpb9" Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.488603 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxqqk\" (UniqueName: \"kubernetes.io/projected/a3a83717-7d8b-4175-82db-106b634368b0-kube-api-access-mxqqk\") pod \"neutron-c9e7-account-create-5fpb9\" (UID: \"a3a83717-7d8b-4175-82db-106b634368b0\") " pod="openstack/neutron-c9e7-account-create-5fpb9" Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.508122 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxqqk\" (UniqueName: \"kubernetes.io/projected/a3a83717-7d8b-4175-82db-106b634368b0-kube-api-access-mxqqk\") pod \"neutron-c9e7-account-create-5fpb9\" (UID: \"a3a83717-7d8b-4175-82db-106b634368b0\") " pod="openstack/neutron-c9e7-account-create-5fpb9" Dec 16 12:14:59 crc kubenswrapper[4805]: I1216 12:14:59.572168 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c9e7-account-create-5fpb9" Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.149673 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx"] Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.151374 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.153451 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.153651 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.165490 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx"] Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.249463 4805 generic.go:334] "Generic (PLEG): container finished" podID="a28faa94-62ae-49fa-8f01-8dff4989044e" containerID="e7fdf8ca296f45dffdc22499e1d25e253d30eb502ca17bedc3e17d3b7f8a1dfa" exitCode=0 Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.249545 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" event={"ID":"a28faa94-62ae-49fa-8f01-8dff4989044e","Type":"ContainerDied","Data":"e7fdf8ca296f45dffdc22499e1d25e253d30eb502ca17bedc3e17d3b7f8a1dfa"} Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.255631 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c9e7-account-create-5fpb9"] Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.255750 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qckpn" event={"ID":"b60b01ba-b205-4451-8a28-33dfb9845ff2","Type":"ContainerStarted","Data":"049de569c4c845d1fb82d31f4b08259fa6f699761843bf03f197d811f621389c"} Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.313066 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d960318b-f654-493a-bc0d-48760f738455-secret-volume\") pod \"collect-profiles-29431455-lcnfx\" (UID: \"d960318b-f654-493a-bc0d-48760f738455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.313151 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d960318b-f654-493a-bc0d-48760f738455-config-volume\") pod \"collect-profiles-29431455-lcnfx\" (UID: \"d960318b-f654-493a-bc0d-48760f738455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.313281 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmpk9\" (UniqueName: \"kubernetes.io/projected/d960318b-f654-493a-bc0d-48760f738455-kube-api-access-rmpk9\") pod \"collect-profiles-29431455-lcnfx\" (UID: \"d960318b-f654-493a-bc0d-48760f738455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.348569 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-qckpn" podStartSLOduration=8.823850566 podStartE2EDuration="21.348544837s" podCreationTimestamp="2025-12-16 12:14:39 +0000 UTC" firstStartedPulling="2025-12-16 12:14:47.260065643 +0000 UTC m=+1160.978323448" lastFinishedPulling="2025-12-16 12:14:59.784759914 +0000 UTC m=+1173.503017719" 
observedRunningTime="2025-12-16 12:15:00.329380517 +0000 UTC m=+1174.047638342" watchObservedRunningTime="2025-12-16 12:15:00.348544837 +0000 UTC m=+1174.066802662" Dec 16 12:15:00 crc kubenswrapper[4805]: W1216 12:15:00.375723 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47d14a2a_df4a_45be_b675_9a69ed1e8d45.slice/crio-37d893a34f9566ceb466d9c40266b561fe14cfa1920aaa178ed34cae44d10502 WatchSource:0}: Error finding container 37d893a34f9566ceb466d9c40266b561fe14cfa1920aaa178ed34cae44d10502: Status 404 returned error can't find the container with id 37d893a34f9566ceb466d9c40266b561fe14cfa1920aaa178ed34cae44d10502 Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.376423 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-514d-account-create-2rk67"] Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.395849 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3d76-account-create-npbtw"] Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.418312 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmpk9\" (UniqueName: \"kubernetes.io/projected/d960318b-f654-493a-bc0d-48760f738455-kube-api-access-rmpk9\") pod \"collect-profiles-29431455-lcnfx\" (UID: \"d960318b-f654-493a-bc0d-48760f738455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.418396 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d960318b-f654-493a-bc0d-48760f738455-secret-volume\") pod \"collect-profiles-29431455-lcnfx\" (UID: \"d960318b-f654-493a-bc0d-48760f738455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.418438 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d960318b-f654-493a-bc0d-48760f738455-config-volume\") pod \"collect-profiles-29431455-lcnfx\" (UID: \"d960318b-f654-493a-bc0d-48760f738455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.428006 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d960318b-f654-493a-bc0d-48760f738455-config-volume\") pod \"collect-profiles-29431455-lcnfx\" (UID: \"d960318b-f654-493a-bc0d-48760f738455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.428894 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d960318b-f654-493a-bc0d-48760f738455-secret-volume\") pod \"collect-profiles-29431455-lcnfx\" (UID: \"d960318b-f654-493a-bc0d-48760f738455\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.454164 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmpk9\" (UniqueName: \"kubernetes.io/projected/d960318b-f654-493a-bc0d-48760f738455-kube-api-access-rmpk9\") pod \"collect-profiles-29431455-lcnfx\" (UID: \"d960318b-f654-493a-bc0d-48760f738455\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.479833 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" Dec 16 12:15:00 crc kubenswrapper[4805]: I1216 12:15:00.993766 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx"] Dec 16 12:15:01 crc kubenswrapper[4805]: W1216 12:15:01.002876 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd960318b_f654_493a_bc0d_48760f738455.slice/crio-42b7159eebad831ba86b7d2bee24c73e46cbc9ba5a19510d7e7b2575a7f5c1b2 WatchSource:0}: Error finding container 42b7159eebad831ba86b7d2bee24c73e46cbc9ba5a19510d7e7b2575a7f5c1b2: Status 404 returned error can't find the container with id 42b7159eebad831ba86b7d2bee24c73e46cbc9ba5a19510d7e7b2575a7f5c1b2 Dec 16 12:15:01 crc kubenswrapper[4805]: I1216 12:15:01.276478 4805 generic.go:334] "Generic (PLEG): container finished" podID="47d14a2a-df4a-45be-b675-9a69ed1e8d45" containerID="bd53f6966de5580751d1978ea6bec3d5767d6fab88d894e29519f2f913a794f8" exitCode=0 Dec 16 12:15:01 crc kubenswrapper[4805]: I1216 12:15:01.276553 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3d76-account-create-npbtw" event={"ID":"47d14a2a-df4a-45be-b675-9a69ed1e8d45","Type":"ContainerDied","Data":"bd53f6966de5580751d1978ea6bec3d5767d6fab88d894e29519f2f913a794f8"} Dec 16 12:15:01 crc kubenswrapper[4805]: I1216 12:15:01.276604 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3d76-account-create-npbtw" event={"ID":"47d14a2a-df4a-45be-b675-9a69ed1e8d45","Type":"ContainerStarted","Data":"37d893a34f9566ceb466d9c40266b561fe14cfa1920aaa178ed34cae44d10502"} Dec 16 12:15:01 crc kubenswrapper[4805]: I1216 12:15:01.278813 4805 generic.go:334] "Generic (PLEG): container finished" podID="a3a83717-7d8b-4175-82db-106b634368b0" containerID="56c3d96d5c34655ac8e4ef1fafa79029a2d8de902b0fbe32a7e7893b6112742b" exitCode=0 Dec 16 12:15:01 crc kubenswrapper[4805]: I1216 12:15:01.279271 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c9e7-account-create-5fpb9" event={"ID":"a3a83717-7d8b-4175-82db-106b634368b0","Type":"ContainerDied","Data":"56c3d96d5c34655ac8e4ef1fafa79029a2d8de902b0fbe32a7e7893b6112742b"} Dec 16 12:15:01 crc kubenswrapper[4805]: I1216 12:15:01.279321 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c9e7-account-create-5fpb9" event={"ID":"a3a83717-7d8b-4175-82db-106b634368b0","Type":"ContainerStarted","Data":"f8f0f3754baf0eff83f67ca1bbb94c5fcc782bbc400fe5ebe509f2117d1196a0"} Dec 16 12:15:01 crc kubenswrapper[4805]: I1216 12:15:01.285134 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" event={"ID":"a28faa94-62ae-49fa-8f01-8dff4989044e","Type":"ContainerStarted","Data":"4d198bf75617b597aedd40c47c2c65654c4d7130efa0a286b52ecdf95bba44d6"} Dec 16 12:15:01 crc kubenswrapper[4805]: I1216 12:15:01.285413 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:15:01 crc kubenswrapper[4805]: I1216 12:15:01.287992 4805 generic.go:334] "Generic (PLEG): container finished" podID="66f57d62-e317-4e4b-ac87-b56d6ece0564" containerID="140a39ad1a22edb05009c51b3050ff9138f2740544805dda33f32fa10cc5a526" 
exitCode=0 Dec 16 12:15:01 crc kubenswrapper[4805]: I1216 12:15:01.288040 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-514d-account-create-2rk67" event={"ID":"66f57d62-e317-4e4b-ac87-b56d6ece0564","Type":"ContainerDied","Data":"140a39ad1a22edb05009c51b3050ff9138f2740544805dda33f32fa10cc5a526"} Dec 16 12:15:01 crc kubenswrapper[4805]: I1216 12:15:01.288057 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-514d-account-create-2rk67" event={"ID":"66f57d62-e317-4e4b-ac87-b56d6ece0564","Type":"ContainerStarted","Data":"094cece2a0737aa2afc03620796c0b431ffc01117a4ddd0b8974e3c21caac499"} Dec 16 12:15:01 crc kubenswrapper[4805]: I1216 12:15:01.290341 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" event={"ID":"d960318b-f654-493a-bc0d-48760f738455","Type":"ContainerStarted","Data":"659406396d073ad7a213223a5c39690649f10afa6eb3d2d3bcffa33538990b27"} Dec 16 12:15:01 crc kubenswrapper[4805]: I1216 12:15:01.290368 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" event={"ID":"d960318b-f654-493a-bc0d-48760f738455","Type":"ContainerStarted","Data":"42b7159eebad831ba86b7d2bee24c73e46cbc9ba5a19510d7e7b2575a7f5c1b2"} Dec 16 12:15:01 crc kubenswrapper[4805]: I1216 12:15:01.362105 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" podStartSLOduration=1.362088711 podStartE2EDuration="1.362088711s" podCreationTimestamp="2025-12-16 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:01.356253323 +0000 UTC m=+1175.074511148" watchObservedRunningTime="2025-12-16 12:15:01.362088711 +0000 UTC m=+1175.080346536" Dec 16 12:15:02 crc kubenswrapper[4805]: I1216 12:15:02.301069 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" event={"ID":"d960318b-f654-493a-bc0d-48760f738455","Type":"ContainerDied","Data":"659406396d073ad7a213223a5c39690649f10afa6eb3d2d3bcffa33538990b27"} Dec 16 12:15:02 crc kubenswrapper[4805]: I1216 12:15:02.301006 4805 generic.go:334] "Generic (PLEG): container finished" podID="d960318b-f654-493a-bc0d-48760f738455" containerID="659406396d073ad7a213223a5c39690649f10afa6eb3d2d3bcffa33538990b27" exitCode=0 Dec 16 12:15:02 crc kubenswrapper[4805]: I1216 12:15:02.326271 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" podStartSLOduration=11.326247668 podStartE2EDuration="11.326247668s" podCreationTimestamp="2025-12-16 12:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:01.385444561 +0000 UTC m=+1175.103702386" watchObservedRunningTime="2025-12-16 12:15:02.326247668 +0000 UTC m=+1176.044505483" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:02.758670 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-514d-account-create-2rk67" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:02.782639 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3d76-account-create-npbtw" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:02.788152 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c9e7-account-create-5fpb9" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:02.860701 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zqkx\" (UniqueName: \"kubernetes.io/projected/66f57d62-e317-4e4b-ac87-b56d6ece0564-kube-api-access-7zqkx\") pod \"66f57d62-e317-4e4b-ac87-b56d6ece0564\" (UID: \"66f57d62-e317-4e4b-ac87-b56d6ece0564\") " Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:02.866871 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66f57d62-e317-4e4b-ac87-b56d6ece0564-kube-api-access-7zqkx" (OuterVolumeSpecName: "kube-api-access-7zqkx") pod "66f57d62-e317-4e4b-ac87-b56d6ece0564" (UID: "66f57d62-e317-4e4b-ac87-b56d6ece0564"). InnerVolumeSpecName "kube-api-access-7zqkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:02.962418 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnplb\" (UniqueName: \"kubernetes.io/projected/47d14a2a-df4a-45be-b675-9a69ed1e8d45-kube-api-access-fnplb\") pod \"47d14a2a-df4a-45be-b675-9a69ed1e8d45\" (UID: \"47d14a2a-df4a-45be-b675-9a69ed1e8d45\") " Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:02.962490 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxqqk\" (UniqueName: \"kubernetes.io/projected/a3a83717-7d8b-4175-82db-106b634368b0-kube-api-access-mxqqk\") pod \"a3a83717-7d8b-4175-82db-106b634368b0\" (UID: \"a3a83717-7d8b-4175-82db-106b634368b0\") " Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:02.963112 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zqkx\" (UniqueName: \"kubernetes.io/projected/66f57d62-e317-4e4b-ac87-b56d6ece0564-kube-api-access-7zqkx\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:02.965995 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47d14a2a-df4a-45be-b675-9a69ed1e8d45-kube-api-access-fnplb" (OuterVolumeSpecName: "kube-api-access-fnplb") pod "47d14a2a-df4a-45be-b675-9a69ed1e8d45" (UID: "47d14a2a-df4a-45be-b675-9a69ed1e8d45"). InnerVolumeSpecName "kube-api-access-fnplb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:02.966362 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3a83717-7d8b-4175-82db-106b634368b0-kube-api-access-mxqqk" (OuterVolumeSpecName: "kube-api-access-mxqqk") pod "a3a83717-7d8b-4175-82db-106b634368b0" (UID: "a3a83717-7d8b-4175-82db-106b634368b0"). InnerVolumeSpecName "kube-api-access-mxqqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.065218 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnplb\" (UniqueName: \"kubernetes.io/projected/47d14a2a-df4a-45be-b675-9a69ed1e8d45-kube-api-access-fnplb\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.065256 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxqqk\" (UniqueName: \"kubernetes.io/projected/a3a83717-7d8b-4175-82db-106b634368b0-kube-api-access-mxqqk\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.322957 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3d76-account-create-npbtw" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.324901 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3d76-account-create-npbtw" event={"ID":"47d14a2a-df4a-45be-b675-9a69ed1e8d45","Type":"ContainerDied","Data":"37d893a34f9566ceb466d9c40266b561fe14cfa1920aaa178ed34cae44d10502"} Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.324938 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37d893a34f9566ceb466d9c40266b561fe14cfa1920aaa178ed34cae44d10502" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.333274 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c9e7-account-create-5fpb9" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.333392 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c9e7-account-create-5fpb9" event={"ID":"a3a83717-7d8b-4175-82db-106b634368b0","Type":"ContainerDied","Data":"f8f0f3754baf0eff83f67ca1bbb94c5fcc782bbc400fe5ebe509f2117d1196a0"} Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.333431 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8f0f3754baf0eff83f67ca1bbb94c5fcc782bbc400fe5ebe509f2117d1196a0" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.336630 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-514d-account-create-2rk67" event={"ID":"66f57d62-e317-4e4b-ac87-b56d6ece0564","Type":"ContainerDied","Data":"094cece2a0737aa2afc03620796c0b431ffc01117a4ddd0b8974e3c21caac499"} Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.336662 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="094cece2a0737aa2afc03620796c0b431ffc01117a4ddd0b8974e3c21caac499" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.336811 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-514d-account-create-2rk67" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.710707 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.880827 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d960318b-f654-493a-bc0d-48760f738455-secret-volume\") pod \"d960318b-f654-493a-bc0d-48760f738455\" (UID: \"d960318b-f654-493a-bc0d-48760f738455\") " Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.880989 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d960318b-f654-493a-bc0d-48760f738455-config-volume\") pod \"d960318b-f654-493a-bc0d-48760f738455\" (UID: \"d960318b-f654-493a-bc0d-48760f738455\") " Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.881059 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmpk9\" (UniqueName: \"kubernetes.io/projected/d960318b-f654-493a-bc0d-48760f738455-kube-api-access-rmpk9\") pod \"d960318b-f654-493a-bc0d-48760f738455\" (UID: \"d960318b-f654-493a-bc0d-48760f738455\") " Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.881896 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d960318b-f654-493a-bc0d-48760f738455-config-volume" (OuterVolumeSpecName: "config-volume") pod "d960318b-f654-493a-bc0d-48760f738455" (UID: "d960318b-f654-493a-bc0d-48760f738455"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.901004 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d960318b-f654-493a-bc0d-48760f738455-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d960318b-f654-493a-bc0d-48760f738455" (UID: "d960318b-f654-493a-bc0d-48760f738455"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.901083 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d960318b-f654-493a-bc0d-48760f738455-kube-api-access-rmpk9" (OuterVolumeSpecName: "kube-api-access-rmpk9") pod "d960318b-f654-493a-bc0d-48760f738455" (UID: "d960318b-f654-493a-bc0d-48760f738455"). InnerVolumeSpecName "kube-api-access-rmpk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.982573 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d960318b-f654-493a-bc0d-48760f738455-config-volume\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.982636 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmpk9\" (UniqueName: \"kubernetes.io/projected/d960318b-f654-493a-bc0d-48760f738455-kube-api-access-rmpk9\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:03 crc kubenswrapper[4805]: I1216 12:15:03.982652 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d960318b-f654-493a-bc0d-48760f738455-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:04 crc kubenswrapper[4805]: I1216 12:15:04.344502 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" Dec 16 12:15:04 crc kubenswrapper[4805]: I1216 12:15:04.344416 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx" event={"ID":"d960318b-f654-493a-bc0d-48760f738455","Type":"ContainerDied","Data":"42b7159eebad831ba86b7d2bee24c73e46cbc9ba5a19510d7e7b2575a7f5c1b2"} Dec 16 12:15:04 crc kubenswrapper[4805]: I1216 12:15:04.349331 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42b7159eebad831ba86b7d2bee24c73e46cbc9ba5a19510d7e7b2575a7f5c1b2" Dec 16 12:15:05 crc kubenswrapper[4805]: I1216 12:15:05.353376 4805 generic.go:334] "Generic (PLEG): container finished" podID="b60b01ba-b205-4451-8a28-33dfb9845ff2" containerID="049de569c4c845d1fb82d31f4b08259fa6f699761843bf03f197d811f621389c" exitCode=0 Dec 16 12:15:05 crc kubenswrapper[4805]: I1216 12:15:05.353463 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qckpn" event={"ID":"b60b01ba-b205-4451-8a28-33dfb9845ff2","Type":"ContainerDied","Data":"049de569c4c845d1fb82d31f4b08259fa6f699761843bf03f197d811f621389c"} Dec 16 12:15:06 crc kubenswrapper[4805]: I1216 12:15:06.632490 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qckpn" Dec 16 12:15:06 crc kubenswrapper[4805]: I1216 12:15:06.726276 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60b01ba-b205-4451-8a28-33dfb9845ff2-config-data\") pod \"b60b01ba-b205-4451-8a28-33dfb9845ff2\" (UID: \"b60b01ba-b205-4451-8a28-33dfb9845ff2\") " Dec 16 12:15:06 crc kubenswrapper[4805]: I1216 12:15:06.726383 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60b01ba-b205-4451-8a28-33dfb9845ff2-combined-ca-bundle\") pod \"b60b01ba-b205-4451-8a28-33dfb9845ff2\" (UID: \"b60b01ba-b205-4451-8a28-33dfb9845ff2\") " Dec 16 12:15:06 crc kubenswrapper[4805]: I1216 12:15:06.726440 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpmrq\" (UniqueName: \"kubernetes.io/projected/b60b01ba-b205-4451-8a28-33dfb9845ff2-kube-api-access-cpmrq\") pod \"b60b01ba-b205-4451-8a28-33dfb9845ff2\" (UID: \"b60b01ba-b205-4451-8a28-33dfb9845ff2\") " Dec 16 12:15:06 crc kubenswrapper[4805]: I1216 12:15:06.734936 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b60b01ba-b205-4451-8a28-33dfb9845ff2-kube-api-access-cpmrq" (OuterVolumeSpecName: "kube-api-access-cpmrq") pod "b60b01ba-b205-4451-8a28-33dfb9845ff2" (UID: "b60b01ba-b205-4451-8a28-33dfb9845ff2"). InnerVolumeSpecName "kube-api-access-cpmrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:15:06 crc kubenswrapper[4805]: I1216 12:15:06.768725 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b60b01ba-b205-4451-8a28-33dfb9845ff2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b60b01ba-b205-4451-8a28-33dfb9845ff2" (UID: "b60b01ba-b205-4451-8a28-33dfb9845ff2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:06 crc kubenswrapper[4805]: I1216 12:15:06.788156 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b60b01ba-b205-4451-8a28-33dfb9845ff2-config-data" (OuterVolumeSpecName: "config-data") pod "b60b01ba-b205-4451-8a28-33dfb9845ff2" (UID: "b60b01ba-b205-4451-8a28-33dfb9845ff2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:06 crc kubenswrapper[4805]: I1216 12:15:06.828601 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b60b01ba-b205-4451-8a28-33dfb9845ff2-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:06 crc kubenswrapper[4805]: I1216 12:15:06.828633 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b60b01ba-b205-4451-8a28-33dfb9845ff2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:06 crc kubenswrapper[4805]: I1216 12:15:06.828643 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpmrq\" (UniqueName: \"kubernetes.io/projected/b60b01ba-b205-4451-8a28-33dfb9845ff2-kube-api-access-cpmrq\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:06 crc kubenswrapper[4805]: I1216 12:15:06.869291 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:15:06 crc kubenswrapper[4805]: I1216 12:15:06.922857 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jqth5"] Dec 16 12:15:06 crc kubenswrapper[4805]: I1216 12:15:06.923160 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-jqth5" podUID="a2cc735d-1342-4b12-a93e-d115daf8c3f3" containerName="dnsmasq-dns" containerID="cri-o://375cbe045015783db95ff1ad9b5888a35c69c91f6c9bf21d12078186d8d9feb1" gracePeriod=10 Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.371413 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qckpn" event={"ID":"b60b01ba-b205-4451-8a28-33dfb9845ff2","Type":"ContainerDied","Data":"6abf53e001595359dd7eb757ee3362df4a09e366026e72b1d0ae12b12944e3ea"} Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.371456 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6abf53e001595359dd7eb757ee3362df4a09e366026e72b1d0ae12b12944e3ea" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.371476 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qckpn" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.373784 4805 generic.go:334] "Generic (PLEG): container finished" podID="a2cc735d-1342-4b12-a93e-d115daf8c3f3" containerID="375cbe045015783db95ff1ad9b5888a35c69c91f6c9bf21d12078186d8d9feb1" exitCode=0 Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.373816 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jqth5" event={"ID":"a2cc735d-1342-4b12-a93e-d115daf8c3f3","Type":"ContainerDied","Data":"375cbe045015783db95ff1ad9b5888a35c69c91f6c9bf21d12078186d8d9feb1"} Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.702802 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zmd7t"] Dec 16 12:15:07 crc kubenswrapper[4805]: E1216 12:15:07.703273 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a83717-7d8b-4175-82db-106b634368b0" containerName="mariadb-account-create" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.703292 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a83717-7d8b-4175-82db-106b634368b0" containerName="mariadb-account-create" Dec 16 12:15:07 crc kubenswrapper[4805]: E1216 12:15:07.703317 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b60b01ba-b205-4451-8a28-33dfb9845ff2" containerName="keystone-db-sync" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.703325 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60b01ba-b205-4451-8a28-33dfb9845ff2" containerName="keystone-db-sync" Dec 16 12:15:07 crc kubenswrapper[4805]: E1216 12:15:07.703344 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d14a2a-df4a-45be-b675-9a69ed1e8d45" containerName="mariadb-account-create" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.703353 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d14a2a-df4a-45be-b675-9a69ed1e8d45" containerName="mariadb-account-create" Dec 16 12:15:07 crc kubenswrapper[4805]: E1216 12:15:07.703370 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d960318b-f654-493a-bc0d-48760f738455" containerName="collect-profiles" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.703378 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d960318b-f654-493a-bc0d-48760f738455" containerName="collect-profiles" Dec 16 12:15:07 crc kubenswrapper[4805]: E1216 12:15:07.703390 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f57d62-e317-4e4b-ac87-b56d6ece0564" containerName="mariadb-account-create" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.703397 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f57d62-e317-4e4b-ac87-b56d6ece0564" containerName="mariadb-account-create" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.703590 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b60b01ba-b205-4451-8a28-33dfb9845ff2" containerName="keystone-db-sync" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.703614 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3a83717-7d8b-4175-82db-106b634368b0" containerName="mariadb-account-create" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.703625 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d14a2a-df4a-45be-b675-9a69ed1e8d45" containerName="mariadb-account-create" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.703636 4805 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="d960318b-f654-493a-bc0d-48760f738455" containerName="collect-profiles" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.703654 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f57d62-e317-4e4b-ac87-b56d6ece0564" containerName="mariadb-account-create" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.704413 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.710255 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.710374 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.712290 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-d6pz6" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.712498 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.761194 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-d4bjb"] Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.763027 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.801070 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-d4bjb"] Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.847857 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zmd7t"] Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.848517 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.848573 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-combined-ca-bundle\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.848615 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-config\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.848661 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-dns-svc\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.848704 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-credential-keys\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.848742 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-scripts\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.848788 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.848824 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.848887 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nhhz\" (UniqueName: \"kubernetes.io/projected/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-kube-api-access-4nhhz\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.848913 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzvcs\" (UniqueName: \"kubernetes.io/projected/287ae472-06a8-4cc7-8508-1227342e727a-kube-api-access-lzvcs\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.848952 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-fernet-keys\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.848975 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-config-data\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.950944 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-dns-svc\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 
12:15:07.951012 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-credential-keys\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.951042 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-scripts\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.951074 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.951108 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.951152 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nhhz\" (UniqueName: \"kubernetes.io/projected/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-kube-api-access-4nhhz\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.951173 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzvcs\" (UniqueName: \"kubernetes.io/projected/287ae472-06a8-4cc7-8508-1227342e727a-kube-api-access-lzvcs\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.951212 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-fernet-keys\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.951232 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-config-data\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.951263 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.951285 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-combined-ca-bundle\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.951306 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-config\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.952353 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-config\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.953307 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.954181 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.954850 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-dns-svc\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.955533 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.960588 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-scripts\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.961465 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-credential-keys\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.965600 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-combined-ca-bundle\") pod \"keystone-bootstrap-zmd7t\" (UID: 
\"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.990640 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f4f4b6f8f-m9ldv"] Dec 16 12:15:07 crc kubenswrapper[4805]: I1216 12:15:07.991981 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:07.999848 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-config-data\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.000319 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-fernet-keys\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.041691 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.044549 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.044785 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-bxj5c" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.044896 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.058869 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nhhz\" (UniqueName: \"kubernetes.io/projected/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-kube-api-access-4nhhz\") pod \"dnsmasq-dns-55fff446b9-d4bjb\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.085080 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzvcs\" (UniqueName: \"kubernetes.io/projected/287ae472-06a8-4cc7-8508-1227342e727a-kube-api-access-lzvcs\") pod \"keystone-bootstrap-zmd7t\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.086210 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.149260 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f4f4b6f8f-m9ldv"] Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.160382 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4a288232-35f9-4e02-bc87-2c7af60a884b-horizon-secret-key\") pod \"horizon-6f4f4b6f8f-m9ldv\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.160441 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a288232-35f9-4e02-bc87-2c7af60a884b-config-data\") pod \"horizon-6f4f4b6f8f-m9ldv\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.160466 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vqrh\" (UniqueName: \"kubernetes.io/projected/4a288232-35f9-4e02-bc87-2c7af60a884b-kube-api-access-4vqrh\") pod \"horizon-6f4f4b6f8f-m9ldv\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.160529 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a288232-35f9-4e02-bc87-2c7af60a884b-logs\") pod \"horizon-6f4f4b6f8f-m9ldv\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.160557 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a288232-35f9-4e02-bc87-2c7af60a884b-scripts\") pod \"horizon-6f4f4b6f8f-m9ldv\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.189360 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-sljmv"] Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.190440 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sljmv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.198699 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-tk2md" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.207201 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.233104 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.240090 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sljmv"] Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.255905 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-9v2nl"] Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.257077 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.262308 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4a288232-35f9-4e02-bc87-2c7af60a884b-horizon-secret-key\") pod \"horizon-6f4f4b6f8f-m9ldv\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.262421 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a288232-35f9-4e02-bc87-2c7af60a884b-config-data\") pod \"horizon-6f4f4b6f8f-m9ldv\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.262491 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vqrh\" (UniqueName: \"kubernetes.io/projected/4a288232-35f9-4e02-bc87-2c7af60a884b-kube-api-access-4vqrh\") pod \"horizon-6f4f4b6f8f-m9ldv\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.262582 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbda936-b96e-491c-9d8c-1e4595a42566-combined-ca-bundle\") pod \"neutron-db-sync-sljmv\" (UID: \"8bbda936-b96e-491c-9d8c-1e4595a42566\") " pod="openstack/neutron-db-sync-sljmv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.262670 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdngh\" (UniqueName: \"kubernetes.io/projected/8bbda936-b96e-491c-9d8c-1e4595a42566-kube-api-access-qdngh\") pod \"neutron-db-sync-sljmv\" (UID: \"8bbda936-b96e-491c-9d8c-1e4595a42566\") " pod="openstack/neutron-db-sync-sljmv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.263479 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a288232-35f9-4e02-bc87-2c7af60a884b-logs\") pod \"horizon-6f4f4b6f8f-m9ldv\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.263604 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a288232-35f9-4e02-bc87-2c7af60a884b-scripts\") pod \"horizon-6f4f4b6f8f-m9ldv\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.263781 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8bbda936-b96e-491c-9d8c-1e4595a42566-config\") pod \"neutron-db-sync-sljmv\" (UID: \"8bbda936-b96e-491c-9d8c-1e4595a42566\") " pod="openstack/neutron-db-sync-sljmv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.264622 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a288232-35f9-4e02-bc87-2c7af60a884b-logs\") pod \"horizon-6f4f4b6f8f-m9ldv\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.265261 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a288232-35f9-4e02-bc87-2c7af60a884b-config-data\") pod \"horizon-6f4f4b6f8f-m9ldv\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.265472 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a288232-35f9-4e02-bc87-2c7af60a884b-scripts\") pod \"horizon-6f4f4b6f8f-m9ldv\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.270249 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.270378 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ql6ss" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.270426 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.286636 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4a288232-35f9-4e02-bc87-2c7af60a884b-horizon-secret-key\") pod \"horizon-6f4f4b6f8f-m9ldv\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.309795 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9v2nl"] Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.332532 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.344953 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-d4bjb"] Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.368127 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vqrh\" (UniqueName: \"kubernetes.io/projected/4a288232-35f9-4e02-bc87-2c7af60a884b-kube-api-access-4vqrh\") pod \"horizon-6f4f4b6f8f-m9ldv\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.369340 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x8xx\" (UniqueName: \"kubernetes.io/projected/aa98eb47-6335-4544-b5d6-718b55075000-kube-api-access-2x8xx\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.369383 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8bbda936-b96e-491c-9d8c-1e4595a42566-config\") pod \"neutron-db-sync-sljmv\" (UID: \"8bbda936-b96e-491c-9d8c-1e4595a42566\") " pod="openstack/neutron-db-sync-sljmv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.369407 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-scripts\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.369462 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-config-data\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.369499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-combined-ca-bundle\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.369534 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbda936-b96e-491c-9d8c-1e4595a42566-combined-ca-bundle\") pod \"neutron-db-sync-sljmv\" (UID: \"8bbda936-b96e-491c-9d8c-1e4595a42566\") " pod="openstack/neutron-db-sync-sljmv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.369552 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-db-sync-config-data\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.369570 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/aa98eb47-6335-4544-b5d6-718b55075000-etc-machine-id\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.369607 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdngh\" (UniqueName: \"kubernetes.io/projected/8bbda936-b96e-491c-9d8c-1e4595a42566-kube-api-access-qdngh\") pod \"neutron-db-sync-sljmv\" (UID: \"8bbda936-b96e-491c-9d8c-1e4595a42566\") " pod="openstack/neutron-db-sync-sljmv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.381907 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbda936-b96e-491c-9d8c-1e4595a42566-combined-ca-bundle\") pod \"neutron-db-sync-sljmv\" (UID: \"8bbda936-b96e-491c-9d8c-1e4595a42566\") " pod="openstack/neutron-db-sync-sljmv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.395529 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8bbda936-b96e-491c-9d8c-1e4595a42566-config\") pod \"neutron-db-sync-sljmv\" (UID: \"8bbda936-b96e-491c-9d8c-1e4595a42566\") " pod="openstack/neutron-db-sync-sljmv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.413406 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-l88fx"] Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.416214 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdngh\" (UniqueName: \"kubernetes.io/projected/8bbda936-b96e-491c-9d8c-1e4595a42566-kube-api-access-qdngh\") pod \"neutron-db-sync-sljmv\" (UID: \"8bbda936-b96e-491c-9d8c-1e4595a42566\") " pod="openstack/neutron-db-sync-sljmv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.424201 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l88fx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.427773 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4z59v" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.428217 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.451316 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-l88fx"] Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.455133 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.473649 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-combined-ca-bundle\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.473748 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-db-sync-config-data\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.473775 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa98eb47-6335-4544-b5d6-718b55075000-etc-machine-id\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.473880 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x8xx\" (UniqueName: \"kubernetes.io/projected/aa98eb47-6335-4544-b5d6-718b55075000-kube-api-access-2x8xx\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.473905 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-scripts\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.473930 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-config-data\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.479692 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa98eb47-6335-4544-b5d6-718b55075000-etc-machine-id\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.487053 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-combined-ca-bundle\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.489546 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-654b85648c-wz8kq"] Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.490835 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-config-data\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" 
Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.491337 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.493047 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-db-sync-config-data\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.493565 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-scripts\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.575595 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bedb416e-1423-4c43-8676-f6843c51c7b0-combined-ca-bundle\") pod \"barbican-db-sync-l88fx\" (UID: \"bedb416e-1423-4c43-8676-f6843c51c7b0\") " pod="openstack/barbican-db-sync-l88fx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.575665 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d088a1e-1b68-444c-9155-016204abfaf9-scripts\") pod \"horizon-654b85648c-wz8kq\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.575687 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5t2p\" (UniqueName: \"kubernetes.io/projected/bedb416e-1423-4c43-8676-f6843c51c7b0-kube-api-access-m5t2p\") pod \"barbican-db-sync-l88fx\" (UID: \"bedb416e-1423-4c43-8676-f6843c51c7b0\") " pod="openstack/barbican-db-sync-l88fx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.575729 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bedb416e-1423-4c43-8676-f6843c51c7b0-db-sync-config-data\") pod \"barbican-db-sync-l88fx\" (UID: \"bedb416e-1423-4c43-8676-f6843c51c7b0\") " pod="openstack/barbican-db-sync-l88fx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.575768 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d088a1e-1b68-444c-9155-016204abfaf9-logs\") pod \"horizon-654b85648c-wz8kq\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.575813 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d088a1e-1b68-444c-9155-016204abfaf9-horizon-secret-key\") pod \"horizon-654b85648c-wz8kq\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.575929 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d088a1e-1b68-444c-9155-016204abfaf9-config-data\") 
pod \"horizon-654b85648c-wz8kq\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.575984 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4spx\" (UniqueName: \"kubernetes.io/projected/7d088a1e-1b68-444c-9155-016204abfaf9-kube-api-access-g4spx\") pod \"horizon-654b85648c-wz8kq\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.592647 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sljmv" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.597454 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2th2m"] Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.663090 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x8xx\" (UniqueName: \"kubernetes.io/projected/aa98eb47-6335-4544-b5d6-718b55075000-kube-api-access-2x8xx\") pod \"cinder-db-sync-9v2nl\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.731275 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-654b85648c-wz8kq"] Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.731432 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.759233 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-kx4cl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.763772 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4spx\" (UniqueName: \"kubernetes.io/projected/7d088a1e-1b68-444c-9155-016204abfaf9-kube-api-access-g4spx\") pod \"horizon-654b85648c-wz8kq\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.763970 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bedb416e-1423-4c43-8676-f6843c51c7b0-combined-ca-bundle\") pod \"barbican-db-sync-l88fx\" (UID: \"bedb416e-1423-4c43-8676-f6843c51c7b0\") " pod="openstack/barbican-db-sync-l88fx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.764022 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d088a1e-1b68-444c-9155-016204abfaf9-scripts\") pod \"horizon-654b85648c-wz8kq\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.764138 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5t2p\" (UniqueName: \"kubernetes.io/projected/bedb416e-1423-4c43-8676-f6843c51c7b0-kube-api-access-m5t2p\") pod \"barbican-db-sync-l88fx\" (UID: \"bedb416e-1423-4c43-8676-f6843c51c7b0\") " pod="openstack/barbican-db-sync-l88fx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.764198 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/bedb416e-1423-4c43-8676-f6843c51c7b0-db-sync-config-data\") pod \"barbican-db-sync-l88fx\" (UID: \"bedb416e-1423-4c43-8676-f6843c51c7b0\") " pod="openstack/barbican-db-sync-l88fx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.764334 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d088a1e-1b68-444c-9155-016204abfaf9-logs\") pod \"horizon-654b85648c-wz8kq\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.764413 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d088a1e-1b68-444c-9155-016204abfaf9-horizon-secret-key\") pod \"horizon-654b85648c-wz8kq\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.764689 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d088a1e-1b68-444c-9155-016204abfaf9-config-data\") pod \"horizon-654b85648c-wz8kq\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.767958 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d088a1e-1b68-444c-9155-016204abfaf9-config-data\") pod \"horizon-654b85648c-wz8kq\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.779484 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d088a1e-1b68-444c-9155-016204abfaf9-scripts\") pod \"horizon-654b85648c-wz8kq\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.779745 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d088a1e-1b68-444c-9155-016204abfaf9-logs\") pod \"horizon-654b85648c-wz8kq\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.786975 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.787467 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.788634 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bedb416e-1423-4c43-8676-f6843c51c7b0-db-sync-config-data\") pod \"barbican-db-sync-l88fx\" (UID: \"bedb416e-1423-4c43-8676-f6843c51c7b0\") " pod="openstack/barbican-db-sync-l88fx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.811247 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bedb416e-1423-4c43-8676-f6843c51c7b0-combined-ca-bundle\") pod \"barbican-db-sync-l88fx\" (UID: \"bedb416e-1423-4c43-8676-f6843c51c7b0\") " pod="openstack/barbican-db-sync-l88fx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.817453 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d088a1e-1b68-444c-9155-016204abfaf9-horizon-secret-key\") pod \"horizon-654b85648c-wz8kq\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.843791 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4spx\" (UniqueName: \"kubernetes.io/projected/7d088a1e-1b68-444c-9155-016204abfaf9-kube-api-access-g4spx\") pod \"horizon-654b85648c-wz8kq\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.848710 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5t2p\" (UniqueName: \"kubernetes.io/projected/bedb416e-1423-4c43-8676-f6843c51c7b0-kube-api-access-m5t2p\") pod \"barbican-db-sync-l88fx\" (UID: \"bedb416e-1423-4c43-8676-f6843c51c7b0\") " pod="openstack/barbican-db-sync-l88fx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.856819 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.869638 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-scripts\") pod \"placement-db-sync-2th2m\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.869987 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-logs\") pod \"placement-db-sync-2th2m\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.870012 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-config-data\") pod \"placement-db-sync-2th2m\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.870062 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-combined-ca-bundle\") pod \"placement-db-sync-2th2m\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.870117 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbkbl\" (UniqueName: \"kubernetes.io/projected/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-kube-api-access-kbkbl\") pod \"placement-db-sync-2th2m\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.913363 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.926869 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-287gx"] Dec 16 12:15:08 crc kubenswrapper[4805]: E1216 12:15:08.928072 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cc735d-1342-4b12-a93e-d115daf8c3f3" containerName="dnsmasq-dns" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.928099 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cc735d-1342-4b12-a93e-d115daf8c3f3" containerName="dnsmasq-dns" Dec 16 12:15:08 crc kubenswrapper[4805]: E1216 12:15:08.928135 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cc735d-1342-4b12-a93e-d115daf8c3f3" containerName="init" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.928172 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cc735d-1342-4b12-a93e-d115daf8c3f3" containerName="init" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.928488 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2cc735d-1342-4b12-a93e-d115daf8c3f3" containerName="dnsmasq-dns" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.930300 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.937988 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2th2m"] Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.951194 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.956268 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.962525 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.962866 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-287gx"] Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.962948 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.971380 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-ovsdbserver-sb\") pod \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.971598 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-ovsdbserver-nb\") pod \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.971645 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ht2k\" (UniqueName: \"kubernetes.io/projected/a2cc735d-1342-4b12-a93e-d115daf8c3f3-kube-api-access-7ht2k\") pod \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.971824 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-dns-svc\") pod \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.972187 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-config\") pod \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\" (UID: \"a2cc735d-1342-4b12-a93e-d115daf8c3f3\") " Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.980528 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2cc735d-1342-4b12-a93e-d115daf8c3f3-kube-api-access-7ht2k" (OuterVolumeSpecName: "kube-api-access-7ht2k") pod "a2cc735d-1342-4b12-a93e-d115daf8c3f3" (UID: "a2cc735d-1342-4b12-a93e-d115daf8c3f3"). InnerVolumeSpecName "kube-api-access-7ht2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.985744 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-logs\") pod \"placement-db-sync-2th2m\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.985789 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-config-data\") pod \"placement-db-sync-2th2m\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.985809 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.985857 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-combined-ca-bundle\") pod \"placement-db-sync-2th2m\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.985884 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.986917 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-logs\") pod \"placement-db-sync-2th2m\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.987268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbkbl\" (UniqueName: \"kubernetes.io/projected/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-kube-api-access-kbkbl\") pod \"placement-db-sync-2th2m\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.987657 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.987734 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2q2p\" (UniqueName: \"kubernetes.io/projected/8c623e3f-b44e-4d37-add1-c5626879007d-kube-api-access-t2q2p\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " 
pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.987823 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-config\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.987903 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-scripts\") pod \"placement-db-sync-2th2m\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.987937 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.988023 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ht2k\" (UniqueName: \"kubernetes.io/projected/a2cc735d-1342-4b12-a93e-d115daf8c3f3-kube-api-access-7ht2k\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.992182 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-config-data\") pod \"placement-db-sync-2th2m\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:08 crc kubenswrapper[4805]: I1216 12:15:08.997700 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-combined-ca-bundle\") pod \"placement-db-sync-2th2m\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.008023 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbkbl\" (UniqueName: \"kubernetes.io/projected/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-kube-api-access-kbkbl\") pod \"placement-db-sync-2th2m\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.008230 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-scripts\") pod \"placement-db-sync-2th2m\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.017315 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.087291 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a2cc735d-1342-4b12-a93e-d115daf8c3f3" (UID: "a2cc735d-1342-4b12-a93e-d115daf8c3f3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.087968 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a2cc735d-1342-4b12-a93e-d115daf8c3f3" (UID: "a2cc735d-1342-4b12-a93e-d115daf8c3f3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.089045 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-config\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.089099 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-config-data\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.089151 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-scripts\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.089192 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.089258 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.089293 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pscpn\" (UniqueName: \"kubernetes.io/projected/a26a23d3-144a-4e2d-8ce2-63a63da575ff-kube-api-access-pscpn\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.089318 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.089341 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a26a23d3-144a-4e2d-8ce2-63a63da575ff-log-httpd\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.089365 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.089403 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.089443 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a26a23d3-144a-4e2d-8ce2-63a63da575ff-run-httpd\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.089467 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.089495 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2q2p\" (UniqueName: \"kubernetes.io/projected/8c623e3f-b44e-4d37-add1-c5626879007d-kube-api-access-t2q2p\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.089565 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.089577 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.093361 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.093850 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.094131 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:09 crc 
kubenswrapper[4805]: I1216 12:15:09.094345 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-config\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.095615 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.110945 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l88fx" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.115483 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.128317 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a2cc735d-1342-4b12-a93e-d115daf8c3f3" (UID: "a2cc735d-1342-4b12-a93e-d115daf8c3f3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.131084 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2q2p\" (UniqueName: \"kubernetes.io/projected/8c623e3f-b44e-4d37-add1-c5626879007d-kube-api-access-t2q2p\") pod \"dnsmasq-dns-76fcf4b695-287gx\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.179588 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-config" (OuterVolumeSpecName: "config") pod "a2cc735d-1342-4b12-a93e-d115daf8c3f3" (UID: "a2cc735d-1342-4b12-a93e-d115daf8c3f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.191553 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2th2m" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.192549 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.192885 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a26a23d3-144a-4e2d-8ce2-63a63da575ff-run-httpd\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.193025 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-config-data\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.193071 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-scripts\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.193176 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pscpn\" (UniqueName: \"kubernetes.io/projected/a26a23d3-144a-4e2d-8ce2-63a63da575ff-kube-api-access-pscpn\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.193196 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.193213 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a26a23d3-144a-4e2d-8ce2-63a63da575ff-log-httpd\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.193244 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a26a23d3-144a-4e2d-8ce2-63a63da575ff-run-httpd\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.193260 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.193270 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2cc735d-1342-4b12-a93e-d115daf8c3f3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.193672 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/a26a23d3-144a-4e2d-8ce2-63a63da575ff-log-httpd\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.196597 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-scripts\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.198713 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.200800 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.203567 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-config-data\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.217723 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pscpn\" (UniqueName: \"kubernetes.io/projected/a26a23d3-144a-4e2d-8ce2-63a63da575ff-kube-api-access-pscpn\") pod \"ceilometer-0\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.220555 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-d4bjb"] Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.240276 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zmd7t"] Dec 16 12:15:09 crc kubenswrapper[4805]: W1216 12:15:09.252462 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf0e83f1_6df5_4656_9c8e_9855d445c7c5.slice/crio-22ecffc10ab6b5f0d8665738ffbd2b30c670d35d39b2562bbdc98ca9c7e43e2b WatchSource:0}: Error finding container 22ecffc10ab6b5f0d8665738ffbd2b30c670d35d39b2562bbdc98ca9c7e43e2b: Status 404 returned error can't find the container with id 22ecffc10ab6b5f0d8665738ffbd2b30c670d35d39b2562bbdc98ca9c7e43e2b Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.264726 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.293895 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.406631 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sljmv"] Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.421159 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f4f4b6f8f-m9ldv"] Dec 16 12:15:09 crc kubenswrapper[4805]: W1216 12:15:09.453900 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bbda936_b96e_491c_9d8c_1e4595a42566.slice/crio-2cdb3041af2ef3fea1cfd0ee4f396191cf110fc70e8f3d80b74f6c90a7562da4 WatchSource:0}: Error finding container 2cdb3041af2ef3fea1cfd0ee4f396191cf110fc70e8f3d80b74f6c90a7562da4: Status 404 returned error can't find the container with id 2cdb3041af2ef3fea1cfd0ee4f396191cf110fc70e8f3d80b74f6c90a7562da4 Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.551605 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jqth5" event={"ID":"a2cc735d-1342-4b12-a93e-d115daf8c3f3","Type":"ContainerDied","Data":"4f505838886205fd731874db942ea056738266435ceebfe436941e1b427a5b3e"} Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.551678 4805 scope.go:117] "RemoveContainer" containerID="375cbe045015783db95ff1ad9b5888a35c69c91f6c9bf21d12078186d8d9feb1" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.551815 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jqth5" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.579437 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" event={"ID":"cf0e83f1-6df5-4656-9c8e-9855d445c7c5","Type":"ContainerStarted","Data":"22ecffc10ab6b5f0d8665738ffbd2b30c670d35d39b2562bbdc98ca9c7e43e2b"} Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.589876 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zmd7t" event={"ID":"287ae472-06a8-4cc7-8508-1227342e727a","Type":"ContainerStarted","Data":"7d33e23f65b74f476bfac2a9fd8b25d840ba79b5ea8bb5f9613c558419677918"} Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.709114 4805 scope.go:117] "RemoveContainer" containerID="95bc720f0f0ba7a2c79bfc4c674ebccfc0875c1f3dac2955e4a1955b39c05dd2" Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.749771 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jqth5"] Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.787359 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jqth5"] Dec 16 12:15:09 crc kubenswrapper[4805]: I1216 12:15:09.826924 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-l88fx"] Dec 16 12:15:10 crc kubenswrapper[4805]: W1216 12:15:10.042429 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa98eb47_6335_4544_b5d6_718b55075000.slice/crio-ba702690034b498c0522e55cb045496076531450a67535e9dd61d2d39c024375 WatchSource:0}: Error finding container ba702690034b498c0522e55cb045496076531450a67535e9dd61d2d39c024375: Status 404 returned error can't find the container with id ba702690034b498c0522e55cb045496076531450a67535e9dd61d2d39c024375 Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.049444 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-db-sync-9v2nl"] Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.120739 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2th2m"] Dec 16 12:15:10 crc kubenswrapper[4805]: W1216 12:15:10.143875 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded0d4cec_a7ad_4887_8dc8_e7da47a7c878.slice/crio-186345c497ecaa9af594381d5783fce3466b8a13cbfd7557b541cb8fe21c6fff WatchSource:0}: Error finding container 186345c497ecaa9af594381d5783fce3466b8a13cbfd7557b541cb8fe21c6fff: Status 404 returned error can't find the container with id 186345c497ecaa9af594381d5783fce3466b8a13cbfd7557b541cb8fe21c6fff Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.167964 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-654b85648c-wz8kq"] Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.328342 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-287gx"] Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.348873 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.563491 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2cc735d-1342-4b12-a93e-d115daf8c3f3" path="/var/lib/kubelet/pods/a2cc735d-1342-4b12-a93e-d115daf8c3f3/volumes" Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.619722 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-654b85648c-wz8kq" event={"ID":"7d088a1e-1b68-444c-9155-016204abfaf9","Type":"ContainerStarted","Data":"6741f9cf52b7f1dd7d5fbd9f5c534bfad36d12120d8ae138bca2196bae69198c"} Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.622398 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l88fx" event={"ID":"bedb416e-1423-4c43-8676-f6843c51c7b0","Type":"ContainerStarted","Data":"275a55bb31eb7b116c2887e9cb35be196db04085a23e04d096e2f439d877790e"} Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.636345 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a26a23d3-144a-4e2d-8ce2-63a63da575ff","Type":"ContainerStarted","Data":"5fdafa2282a66f659c414130a8989b6a02f4a61d6c35f5fdf7e2a818ed1ddeb7"} Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.672127 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-287gx" event={"ID":"8c623e3f-b44e-4d37-add1-c5626879007d","Type":"ContainerStarted","Data":"9cf3a7fd48a8df14ec76f7c438dfa5a56e81a9417ddefdf7158a41cf72999bf8"} Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.680247 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9v2nl" event={"ID":"aa98eb47-6335-4544-b5d6-718b55075000","Type":"ContainerStarted","Data":"ba702690034b498c0522e55cb045496076531450a67535e9dd61d2d39c024375"} Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.686051 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sljmv" event={"ID":"8bbda936-b96e-491c-9d8c-1e4595a42566","Type":"ContainerStarted","Data":"2cdb3041af2ef3fea1cfd0ee4f396191cf110fc70e8f3d80b74f6c90a7562da4"} Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.693988 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f4f4b6f8f-m9ldv" 
event={"ID":"4a288232-35f9-4e02-bc87-2c7af60a884b","Type":"ContainerStarted","Data":"b5401c5d02f54043ddce81329c1514a38d114e051be393857383825112247212"} Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.697810 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2th2m" event={"ID":"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878","Type":"ContainerStarted","Data":"186345c497ecaa9af594381d5783fce3466b8a13cbfd7557b541cb8fe21c6fff"} Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.761485 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f4f4b6f8f-m9ldv"] Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.818211 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6698d6cf45-z7twf"] Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.820008 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.837253 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.849654 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6698d6cf45-z7twf"] Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.989375 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e75272e8-6166-46a2-a220-dc6e64301e77-logs\") pod \"horizon-6698d6cf45-z7twf\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.989697 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e75272e8-6166-46a2-a220-dc6e64301e77-config-data\") pod \"horizon-6698d6cf45-z7twf\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.989828 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e75272e8-6166-46a2-a220-dc6e64301e77-horizon-secret-key\") pod \"horizon-6698d6cf45-z7twf\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.989904 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjfqx\" (UniqueName: \"kubernetes.io/projected/e75272e8-6166-46a2-a220-dc6e64301e77-kube-api-access-wjfqx\") pod \"horizon-6698d6cf45-z7twf\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:10 crc kubenswrapper[4805]: I1216 12:15:10.989927 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e75272e8-6166-46a2-a220-dc6e64301e77-scripts\") pod \"horizon-6698d6cf45-z7twf\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.091026 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e75272e8-6166-46a2-a220-dc6e64301e77-horizon-secret-key\") pod \"horizon-6698d6cf45-z7twf\" (UID: 
\"e75272e8-6166-46a2-a220-dc6e64301e77\") " pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.091123 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjfqx\" (UniqueName: \"kubernetes.io/projected/e75272e8-6166-46a2-a220-dc6e64301e77-kube-api-access-wjfqx\") pod \"horizon-6698d6cf45-z7twf\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.091168 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e75272e8-6166-46a2-a220-dc6e64301e77-scripts\") pod \"horizon-6698d6cf45-z7twf\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.091233 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e75272e8-6166-46a2-a220-dc6e64301e77-logs\") pod \"horizon-6698d6cf45-z7twf\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.091259 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e75272e8-6166-46a2-a220-dc6e64301e77-config-data\") pod \"horizon-6698d6cf45-z7twf\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.092599 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e75272e8-6166-46a2-a220-dc6e64301e77-config-data\") pod \"horizon-6698d6cf45-z7twf\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.094027 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e75272e8-6166-46a2-a220-dc6e64301e77-logs\") pod \"horizon-6698d6cf45-z7twf\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.094217 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e75272e8-6166-46a2-a220-dc6e64301e77-scripts\") pod \"horizon-6698d6cf45-z7twf\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.127391 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjfqx\" (UniqueName: \"kubernetes.io/projected/e75272e8-6166-46a2-a220-dc6e64301e77-kube-api-access-wjfqx\") pod \"horizon-6698d6cf45-z7twf\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.147187 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e75272e8-6166-46a2-a220-dc6e64301e77-horizon-secret-key\") pod \"horizon-6698d6cf45-z7twf\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.216798 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.713588 4805 generic.go:334] "Generic (PLEG): container finished" podID="cf0e83f1-6df5-4656-9c8e-9855d445c7c5" containerID="f90e95f2233b0126822942827d26a566fd1076c71c1ee7f9c717299d701b5694" exitCode=0 Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.713921 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" event={"ID":"cf0e83f1-6df5-4656-9c8e-9855d445c7c5","Type":"ContainerDied","Data":"f90e95f2233b0126822942827d26a566fd1076c71c1ee7f9c717299d701b5694"} Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.715696 4805 generic.go:334] "Generic (PLEG): container finished" podID="8c623e3f-b44e-4d37-add1-c5626879007d" containerID="406ab79cfcdcde8d447df52c3f2343f0ce1fa3fd9d6cddfc36a65eff44fc2a3f" exitCode=0 Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.715764 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-287gx" event={"ID":"8c623e3f-b44e-4d37-add1-c5626879007d","Type":"ContainerDied","Data":"406ab79cfcdcde8d447df52c3f2343f0ce1fa3fd9d6cddfc36a65eff44fc2a3f"} Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.722343 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sljmv" event={"ID":"8bbda936-b96e-491c-9d8c-1e4595a42566","Type":"ContainerStarted","Data":"577888b0a6ff989ca1016b9930991e75bc4527c3bbae56a147fa1e7bcda1b549"} Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.767618 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zmd7t" event={"ID":"287ae472-06a8-4cc7-8508-1227342e727a","Type":"ContainerStarted","Data":"1b205051b301473d43bda995381a061cd714e8ba20e2b4d2ed5487efd3a6c9ff"} Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.770898 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6698d6cf45-z7twf"] Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.828489 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-sljmv" podStartSLOduration=3.828463983 podStartE2EDuration="3.828463983s" podCreationTimestamp="2025-12-16 12:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:11.827098924 +0000 UTC m=+1185.545356739" watchObservedRunningTime="2025-12-16 12:15:11.828463983 +0000 UTC m=+1185.546721798" Dec 16 12:15:11 crc kubenswrapper[4805]: I1216 12:15:11.886818 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zmd7t" podStartSLOduration=4.886787096 podStartE2EDuration="4.886787096s" podCreationTimestamp="2025-12-16 12:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:11.871652682 +0000 UTC m=+1185.589910487" watchObservedRunningTime="2025-12-16 12:15:11.886787096 +0000 UTC m=+1185.605044921" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.220008 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.313176 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nhhz\" (UniqueName: \"kubernetes.io/projected/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-kube-api-access-4nhhz\") pod \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.313216 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-ovsdbserver-nb\") pod \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.313292 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-dns-swift-storage-0\") pod \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.313315 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-dns-svc\") pod \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.313342 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-config\") pod \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.317971 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-ovsdbserver-sb\") pod \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\" (UID: \"cf0e83f1-6df5-4656-9c8e-9855d445c7c5\") " Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.319892 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-kube-api-access-4nhhz" (OuterVolumeSpecName: "kube-api-access-4nhhz") pod "cf0e83f1-6df5-4656-9c8e-9855d445c7c5" (UID: "cf0e83f1-6df5-4656-9c8e-9855d445c7c5"). InnerVolumeSpecName "kube-api-access-4nhhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.350297 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf0e83f1-6df5-4656-9c8e-9855d445c7c5" (UID: "cf0e83f1-6df5-4656-9c8e-9855d445c7c5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.355960 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf0e83f1-6df5-4656-9c8e-9855d445c7c5" (UID: "cf0e83f1-6df5-4656-9c8e-9855d445c7c5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.364269 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-config" (OuterVolumeSpecName: "config") pod "cf0e83f1-6df5-4656-9c8e-9855d445c7c5" (UID: "cf0e83f1-6df5-4656-9c8e-9855d445c7c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.373219 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf0e83f1-6df5-4656-9c8e-9855d445c7c5" (UID: "cf0e83f1-6df5-4656-9c8e-9855d445c7c5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.375313 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cf0e83f1-6df5-4656-9c8e-9855d445c7c5" (UID: "cf0e83f1-6df5-4656-9c8e-9855d445c7c5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.420621 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.420660 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nhhz\" (UniqueName: \"kubernetes.io/projected/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-kube-api-access-4nhhz\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.420674 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.420685 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.420698 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.420708 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0e83f1-6df5-4656-9c8e-9855d445c7c5-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.791478 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" event={"ID":"cf0e83f1-6df5-4656-9c8e-9855d445c7c5","Type":"ContainerDied","Data":"22ecffc10ab6b5f0d8665738ffbd2b30c670d35d39b2562bbdc98ca9c7e43e2b"} Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.791507 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-d4bjb" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.792023 4805 scope.go:117] "RemoveContainer" containerID="f90e95f2233b0126822942827d26a566fd1076c71c1ee7f9c717299d701b5694" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.801968 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-287gx" event={"ID":"8c623e3f-b44e-4d37-add1-c5626879007d","Type":"ContainerStarted","Data":"313a8690e51a1caf309acfd6185f038285126973c4f70c19e188476c4330f178"} Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.803491 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.810447 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6698d6cf45-z7twf" event={"ID":"e75272e8-6166-46a2-a220-dc6e64301e77","Type":"ContainerStarted","Data":"702ab0a087f5f970ed7849735a2561135df4bc0a7a16f78e03f92eb52a6ec8cb"} Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.856226 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-d4bjb"] Dec 16 12:15:12 crc kubenswrapper[4805]: I1216 12:15:12.895613 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-d4bjb"] Dec 16 12:15:14 crc kubenswrapper[4805]: I1216 12:15:14.541849 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf0e83f1-6df5-4656-9c8e-9855d445c7c5" path="/var/lib/kubelet/pods/cf0e83f1-6df5-4656-9c8e-9855d445c7c5/volumes" Dec 16 12:15:14 crc kubenswrapper[4805]: I1216 12:15:14.834085 4805 generic.go:334] "Generic (PLEG): container finished" podID="c5fec641-bdba-4f8a-b6bf-d13721a860d2" containerID="8236a7a863a73ca414f133a79af5c6335a5ab2bfefb38c3305d212f6c1009298" exitCode=0 Dec 16 12:15:14 crc kubenswrapper[4805]: I1216 12:15:14.834234 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x5dc8" event={"ID":"c5fec641-bdba-4f8a-b6bf-d13721a860d2","Type":"ContainerDied","Data":"8236a7a863a73ca414f133a79af5c6335a5ab2bfefb38c3305d212f6c1009298"} Dec 16 12:15:14 crc kubenswrapper[4805]: I1216 12:15:14.860685 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76fcf4b695-287gx" podStartSLOduration=6.860664983 podStartE2EDuration="6.860664983s" podCreationTimestamp="2025-12-16 12:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:12.912936592 +0000 UTC m=+1186.631194407" watchObservedRunningTime="2025-12-16 12:15:14.860664983 +0000 UTC m=+1188.578922798" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.414109 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-x5dc8" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.539954 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-config-data\") pod \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.540311 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-combined-ca-bundle\") pod \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.540416 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpcxj\" (UniqueName: \"kubernetes.io/projected/c5fec641-bdba-4f8a-b6bf-d13721a860d2-kube-api-access-dpcxj\") pod \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.540506 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-db-sync-config-data\") pod \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\" (UID: \"c5fec641-bdba-4f8a-b6bf-d13721a860d2\") " Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.570972 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5fec641-bdba-4f8a-b6bf-d13721a860d2-kube-api-access-dpcxj" (OuterVolumeSpecName: "kube-api-access-dpcxj") pod "c5fec641-bdba-4f8a-b6bf-d13721a860d2" (UID: "c5fec641-bdba-4f8a-b6bf-d13721a860d2"). InnerVolumeSpecName "kube-api-access-dpcxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.573870 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c5fec641-bdba-4f8a-b6bf-d13721a860d2" (UID: "c5fec641-bdba-4f8a-b6bf-d13721a860d2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.580431 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5fec641-bdba-4f8a-b6bf-d13721a860d2" (UID: "c5fec641-bdba-4f8a-b6bf-d13721a860d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.643047 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.643088 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpcxj\" (UniqueName: \"kubernetes.io/projected/c5fec641-bdba-4f8a-b6bf-d13721a860d2-kube-api-access-dpcxj\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.643102 4805 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.644439 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-config-data" (OuterVolumeSpecName: "config-data") pod "c5fec641-bdba-4f8a-b6bf-d13721a860d2" (UID: "c5fec641-bdba-4f8a-b6bf-d13721a860d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.675419 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-654b85648c-wz8kq"] Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.734951 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d8fcdbb66-gmhjg"] Dec 16 12:15:16 crc kubenswrapper[4805]: E1216 12:15:16.735328 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5fec641-bdba-4f8a-b6bf-d13721a860d2" containerName="glance-db-sync" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.735348 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5fec641-bdba-4f8a-b6bf-d13721a860d2" containerName="glance-db-sync" Dec 16 12:15:16 crc kubenswrapper[4805]: E1216 12:15:16.735364 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0e83f1-6df5-4656-9c8e-9855d445c7c5" containerName="init" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.735370 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0e83f1-6df5-4656-9c8e-9855d445c7c5" containerName="init" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.735536 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0e83f1-6df5-4656-9c8e-9855d445c7c5" containerName="init" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.735552 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5fec641-bdba-4f8a-b6bf-d13721a860d2" containerName="glance-db-sync" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.736403 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.738256 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.745060 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5fec641-bdba-4f8a-b6bf-d13721a860d2-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.759114 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d8fcdbb66-gmhjg"] Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.810156 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6698d6cf45-z7twf"] Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.842809 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69799999fb-rbm4h"] Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.844223 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.846294 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-horizon-tls-certs\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.846339 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-logs\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.846383 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-config-data\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.846458 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-combined-ca-bundle\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.846499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-horizon-secret-key\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.846522 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-scripts\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc 
kubenswrapper[4805]: I1216 12:15:16.846538 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndh4p\" (UniqueName: \"kubernetes.io/projected/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-kube-api-access-ndh4p\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.861194 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69799999fb-rbm4h"] Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.885054 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x5dc8" event={"ID":"c5fec641-bdba-4f8a-b6bf-d13721a860d2","Type":"ContainerDied","Data":"8d7c06f0bcbf350f5b858566f0e9b03a92ea2dc1274e7738864206943385faf9"} Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.885100 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d7c06f0bcbf350f5b858566f0e9b03a92ea2dc1274e7738864206943385faf9" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.885222 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-x5dc8" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.953571 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-horizon-tls-certs\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.953640 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4af4b9e-77b2-4f27-8148-7000d60f2266-scripts\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.953677 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-logs\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.953714 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4af4b9e-77b2-4f27-8148-7000d60f2266-logs\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.953760 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-config-data\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.953782 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4af4b9e-77b2-4f27-8148-7000d60f2266-combined-ca-bundle\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 
16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.953805 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4af4b9e-77b2-4f27-8148-7000d60f2266-horizon-secret-key\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.953885 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4af4b9e-77b2-4f27-8148-7000d60f2266-config-data\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.953938 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-combined-ca-bundle\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.953961 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4af4b9e-77b2-4f27-8148-7000d60f2266-horizon-tls-certs\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.954009 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-horizon-secret-key\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.954065 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-scripts\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.954086 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndh4p\" (UniqueName: \"kubernetes.io/projected/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-kube-api-access-ndh4p\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.954170 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwx2v\" (UniqueName: \"kubernetes.io/projected/d4af4b9e-77b2-4f27-8148-7000d60f2266-kube-api-access-vwx2v\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.955281 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-logs\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 
12:15:16.958891 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-scripts\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.963034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-combined-ca-bundle\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.963610 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-config-data\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.965317 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-horizon-tls-certs\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.966332 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-horizon-secret-key\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:16 crc kubenswrapper[4805]: I1216 12:15:16.983108 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndh4p\" (UniqueName: \"kubernetes.io/projected/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-kube-api-access-ndh4p\") pod \"horizon-6d8fcdbb66-gmhjg\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.056245 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4af4b9e-77b2-4f27-8148-7000d60f2266-scripts\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.056341 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4af4b9e-77b2-4f27-8148-7000d60f2266-logs\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.056389 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4af4b9e-77b2-4f27-8148-7000d60f2266-combined-ca-bundle\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.056413 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/d4af4b9e-77b2-4f27-8148-7000d60f2266-horizon-secret-key\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.056451 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4af4b9e-77b2-4f27-8148-7000d60f2266-config-data\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.056499 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4af4b9e-77b2-4f27-8148-7000d60f2266-horizon-tls-certs\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.056551 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwx2v\" (UniqueName: \"kubernetes.io/projected/d4af4b9e-77b2-4f27-8148-7000d60f2266-kube-api-access-vwx2v\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.056836 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4af4b9e-77b2-4f27-8148-7000d60f2266-logs\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.057529 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4af4b9e-77b2-4f27-8148-7000d60f2266-scripts\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.058819 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4af4b9e-77b2-4f27-8148-7000d60f2266-config-data\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.059202 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.061011 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4af4b9e-77b2-4f27-8148-7000d60f2266-horizon-secret-key\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.064629 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4af4b9e-77b2-4f27-8148-7000d60f2266-combined-ca-bundle\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.072790 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4af4b9e-77b2-4f27-8148-7000d60f2266-horizon-tls-certs\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.092968 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwx2v\" (UniqueName: \"kubernetes.io/projected/d4af4b9e-77b2-4f27-8148-7000d60f2266-kube-api-access-vwx2v\") pod \"horizon-69799999fb-rbm4h\" (UID: \"d4af4b9e-77b2-4f27-8148-7000d60f2266\") " pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.167625 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.685085 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-287gx"] Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.685338 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76fcf4b695-287gx" podUID="8c623e3f-b44e-4d37-add1-c5626879007d" containerName="dnsmasq-dns" containerID="cri-o://313a8690e51a1caf309acfd6185f038285126973c4f70c19e188476c4330f178" gracePeriod=10 Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.688323 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.726428 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rrbmf"] Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.727775 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.767036 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rrbmf"] Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.871546 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.871620 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clvcm\" (UniqueName: \"kubernetes.io/projected/aaba3492-18c7-49bd-90d6-7429d516cb3b-kube-api-access-clvcm\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.871662 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.871683 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.871753 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-config\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.871769 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.972984 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.973035 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.973122 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-config\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.973182 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.973220 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.973265 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clvcm\" (UniqueName: \"kubernetes.io/projected/aaba3492-18c7-49bd-90d6-7429d516cb3b-kube-api-access-clvcm\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.974046 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-config\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.974046 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.974290 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.974597 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.975272 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:17 crc kubenswrapper[4805]: I1216 12:15:17.991588 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clvcm\" (UniqueName: 
\"kubernetes.io/projected/aaba3492-18c7-49bd-90d6-7429d516cb3b-kube-api-access-clvcm\") pod \"dnsmasq-dns-8b5c85b87-rrbmf\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:18 crc kubenswrapper[4805]: I1216 12:15:18.046962 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.171814 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.173705 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.179913 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.180135 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.183274 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-r9rpt" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.186777 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.291098 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.291171 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.291197 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-logs\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.291237 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-scripts\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.291276 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.291304 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-mfcdc\" (UniqueName: \"kubernetes.io/projected/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-kube-api-access-mfcdc\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.291323 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-config-data\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.393032 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-scripts\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.393380 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.393444 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfcdc\" (UniqueName: \"kubernetes.io/projected/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-kube-api-access-mfcdc\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.393473 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-config-data\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.393587 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.393626 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.393654 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-logs\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.394404 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.394497 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.397626 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-logs\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.398913 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-scripts\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.399788 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.401724 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-config-data\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.425431 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.432124 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfcdc\" (UniqueName: \"kubernetes.io/projected/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-kube-api-access-mfcdc\") pod \"glance-default-external-api-0\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.506860 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.892939 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.894725 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.897687 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:18.917889 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.006563 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.006620 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0124c5d-80e6-4e65-804a-c5a437cda935-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.006726 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.006787 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.006828 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0124c5d-80e6-4e65-804a-c5a437cda935-logs\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.006872 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfp6r\" (UniqueName: \"kubernetes.io/projected/c0124c5d-80e6-4e65-804a-c5a437cda935-kube-api-access-tfp6r\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.006889 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.108781 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0124c5d-80e6-4e65-804a-c5a437cda935-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.108936 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.108970 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.109009 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0124c5d-80e6-4e65-804a-c5a437cda935-logs\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.109061 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.109079 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfp6r\" (UniqueName: \"kubernetes.io/projected/c0124c5d-80e6-4e65-804a-c5a437cda935-kube-api-access-tfp6r\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.109135 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.109442 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0124c5d-80e6-4e65-804a-c5a437cda935-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.109703 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.109742 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0124c5d-80e6-4e65-804a-c5a437cda935-logs\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc 
kubenswrapper[4805]: I1216 12:15:19.129286 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.131194 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfp6r\" (UniqueName: \"kubernetes.io/projected/c0124c5d-80e6-4e65-804a-c5a437cda935-kube-api-access-tfp6r\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.131238 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.134836 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.135963 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.219450 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.269150 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-287gx" podUID="8c623e3f-b44e-4d37-add1-c5626879007d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.978494 4805 generic.go:334] "Generic (PLEG): container finished" podID="8c623e3f-b44e-4d37-add1-c5626879007d" containerID="313a8690e51a1caf309acfd6185f038285126973c4f70c19e188476c4330f178" exitCode=0 Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:19.979242 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-287gx" event={"ID":"8c623e3f-b44e-4d37-add1-c5626879007d","Type":"ContainerDied","Data":"313a8690e51a1caf309acfd6185f038285126973c4f70c19e188476c4330f178"} Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:21.020181 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 16 12:15:22 crc kubenswrapper[4805]: I1216 12:15:21.220275 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 16 12:15:23 crc kubenswrapper[4805]: I1216 12:15:23.601008 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69799999fb-rbm4h"] Dec 16 12:15:23 crc kubenswrapper[4805]: I1216 12:15:23.618156 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rrbmf"] Dec 16 12:15:23 crc kubenswrapper[4805]: I1216 12:15:23.635525 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d8fcdbb66-gmhjg"] Dec 16 12:15:23 crc kubenswrapper[4805]: I1216 12:15:23.683033 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 16 12:15:24 crc kubenswrapper[4805]: I1216 12:15:24.414069 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.012599 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:25 crc kubenswrapper[4805]: E1216 12:15:25.026875 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 16 12:15:25 crc kubenswrapper[4805]: E1216 12:15:25.027091 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5t2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-l88fx_openstack(bedb416e-1423-4c43-8676-f6843c51c7b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:15:25 crc kubenswrapper[4805]: E1216 12:15:25.030417 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-l88fx" podUID="bedb416e-1423-4c43-8676-f6843c51c7b0" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.065364 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-287gx" event={"ID":"8c623e3f-b44e-4d37-add1-c5626879007d","Type":"ContainerDied","Data":"9cf3a7fd48a8df14ec76f7c438dfa5a56e81a9417ddefdf7158a41cf72999bf8"} Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.065818 4805 scope.go:117] "RemoveContainer" containerID="313a8690e51a1caf309acfd6185f038285126973c4f70c19e188476c4330f178" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.066169 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-287gx" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.086711 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84","Type":"ContainerStarted","Data":"7693463b0644dcd31cb2b4e28e532daba82b83faf494fba99a8ab276fe980993"} Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.094527 4805 generic.go:334] "Generic (PLEG): container finished" podID="287ae472-06a8-4cc7-8508-1227342e727a" containerID="1b205051b301473d43bda995381a061cd714e8ba20e2b4d2ed5487efd3a6c9ff" exitCode=0 Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.094606 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zmd7t" event={"ID":"287ae472-06a8-4cc7-8508-1227342e727a","Type":"ContainerDied","Data":"1b205051b301473d43bda995381a061cd714e8ba20e2b4d2ed5487efd3a6c9ff"} Dec 16 12:15:25 crc kubenswrapper[4805]: E1216 12:15:25.096911 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-l88fx" podUID="bedb416e-1423-4c43-8676-f6843c51c7b0" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.160032 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-dns-svc\") pod \"8c623e3f-b44e-4d37-add1-c5626879007d\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.160185 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-config\") pod \"8c623e3f-b44e-4d37-add1-c5626879007d\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.160889 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-ovsdbserver-sb\") pod \"8c623e3f-b44e-4d37-add1-c5626879007d\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.160977 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-ovsdbserver-nb\") pod \"8c623e3f-b44e-4d37-add1-c5626879007d\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.161034 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-dns-swift-storage-0\") pod \"8c623e3f-b44e-4d37-add1-c5626879007d\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.161071 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2q2p\" (UniqueName: \"kubernetes.io/projected/8c623e3f-b44e-4d37-add1-c5626879007d-kube-api-access-t2q2p\") pod \"8c623e3f-b44e-4d37-add1-c5626879007d\" (UID: \"8c623e3f-b44e-4d37-add1-c5626879007d\") " Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.182617 4805 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c623e3f-b44e-4d37-add1-c5626879007d-kube-api-access-t2q2p" (OuterVolumeSpecName: "kube-api-access-t2q2p") pod "8c623e3f-b44e-4d37-add1-c5626879007d" (UID: "8c623e3f-b44e-4d37-add1-c5626879007d"). InnerVolumeSpecName "kube-api-access-t2q2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.214901 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c623e3f-b44e-4d37-add1-c5626879007d" (UID: "8c623e3f-b44e-4d37-add1-c5626879007d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.216174 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-config" (OuterVolumeSpecName: "config") pod "8c623e3f-b44e-4d37-add1-c5626879007d" (UID: "8c623e3f-b44e-4d37-add1-c5626879007d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.218941 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c623e3f-b44e-4d37-add1-c5626879007d" (UID: "8c623e3f-b44e-4d37-add1-c5626879007d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.233254 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c623e3f-b44e-4d37-add1-c5626879007d" (UID: "8c623e3f-b44e-4d37-add1-c5626879007d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.234640 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8c623e3f-b44e-4d37-add1-c5626879007d" (UID: "8c623e3f-b44e-4d37-add1-c5626879007d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.264187 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.264874 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.264919 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.264932 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.264943 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c623e3f-b44e-4d37-add1-c5626879007d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.264959 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2q2p\" (UniqueName: \"kubernetes.io/projected/8c623e3f-b44e-4d37-add1-c5626879007d-kube-api-access-t2q2p\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.416610 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-287gx"] Dec 16 12:15:25 crc kubenswrapper[4805]: I1216 12:15:25.425461 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-287gx"] Dec 16 12:15:26 crc kubenswrapper[4805]: I1216 12:15:26.532755 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c623e3f-b44e-4d37-add1-c5626879007d" path="/var/lib/kubelet/pods/8c623e3f-b44e-4d37-add1-c5626879007d/volumes" Dec 16 12:15:27 crc kubenswrapper[4805]: I1216 12:15:27.071966 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:15:27 crc kubenswrapper[4805]: I1216 12:15:27.072031 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:15:27 crc kubenswrapper[4805]: I1216 12:15:27.072082 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 12:15:27 crc kubenswrapper[4805]: I1216 12:15:27.072844 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"49407ccdd3d008f1744b80cf9c050b56468ce45acde47e6aab9b00525b75e878"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 12:15:27 crc kubenswrapper[4805]: I1216 12:15:27.072907 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://49407ccdd3d008f1744b80cf9c050b56468ce45acde47e6aab9b00525b75e878" gracePeriod=600 Dec 16 12:15:28 crc kubenswrapper[4805]: I1216 12:15:28.127955 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="49407ccdd3d008f1744b80cf9c050b56468ce45acde47e6aab9b00525b75e878" exitCode=0 Dec 16 12:15:28 crc kubenswrapper[4805]: I1216 12:15:28.128020 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"49407ccdd3d008f1744b80cf9c050b56468ce45acde47e6aab9b00525b75e878"} Dec 16 12:15:29 crc kubenswrapper[4805]: I1216 12:15:29.269023 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-287gx" podUID="8c623e3f-b44e-4d37-add1-c5626879007d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: i/o timeout" Dec 16 12:15:36 crc kubenswrapper[4805]: W1216 12:15:36.115885 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4af4b9e_77b2_4f27_8148_7000d60f2266.slice/crio-8b8e79257bb25d0d6d076c3a13f8be3e97f0bd5ec1493c3f679d9a8a3f8a6bbe WatchSource:0}: Error finding container 8b8e79257bb25d0d6d076c3a13f8be3e97f0bd5ec1493c3f679d9a8a3f8a6bbe: Status 404 returned error can't find the container with id 8b8e79257bb25d0d6d076c3a13f8be3e97f0bd5ec1493c3f679d9a8a3f8a6bbe Dec 16 12:15:36 crc kubenswrapper[4805]: I1216 12:15:36.213725 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69799999fb-rbm4h" event={"ID":"d4af4b9e-77b2-4f27-8148-7000d60f2266","Type":"ContainerStarted","Data":"8b8e79257bb25d0d6d076c3a13f8be3e97f0bd5ec1493c3f679d9a8a3f8a6bbe"} Dec 16 12:15:38 crc kubenswrapper[4805]: W1216 12:15:38.250684 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaba3492_18c7_49bd_90d6_7429d516cb3b.slice/crio-2210d33322d252f5814018092aaae2cff8cc77518cf57a6dc32caf9389926ac7 WatchSource:0}: Error finding container 2210d33322d252f5814018092aaae2cff8cc77518cf57a6dc32caf9389926ac7: Status 404 returned error can't find the container with id 2210d33322d252f5814018092aaae2cff8cc77518cf57a6dc32caf9389926ac7 Dec 16 12:15:39 crc kubenswrapper[4805]: I1216 12:15:39.247061 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" event={"ID":"aaba3492-18c7-49bd-90d6-7429d516cb3b","Type":"ContainerStarted","Data":"2210d33322d252f5814018092aaae2cff8cc77518cf57a6dc32caf9389926ac7"} Dec 16 12:15:43 crc kubenswrapper[4805]: E1216 12:15:43.963771 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 16 12:15:43 crc kubenswrapper[4805]: E1216 12:15:43.964495 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n55h566h6dh587h56h5c9h65ch54dh595h544h666hd8h5d6h99hfh597h659h5d4h545h674hdbh57bh647h59bh588h666h5f5h698h557h66ch58dhfcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4vqrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6f4f4b6f8f-m9ldv_openstack(4a288232-35f9-4e02-bc87-2c7af60a884b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:15:43 crc kubenswrapper[4805]: E1216 12:15:43.968536 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 16 12:15:43 crc kubenswrapper[4805]: E1216 12:15:43.968652 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6f4f4b6f8f-m9ldv" podUID="4a288232-35f9-4e02-bc87-2c7af60a884b" Dec 16 12:15:43 crc kubenswrapper[4805]: E1216 12:15:43.968824 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c5h67h77hcfh67h576hf7h699h57ch575h5fhf7h59bhbh5fdh586h86h57fh59ch59ch67fh566h5d7h5d8h7h57bh54dh5c7hfch5bdh74hdbq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4spx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-654b85648c-wz8kq_openstack(7d088a1e-1b68-444c-9155-016204abfaf9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:15:43 crc kubenswrapper[4805]: E1216 12:15:43.973971 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-654b85648c-wz8kq" podUID="7d088a1e-1b68-444c-9155-016204abfaf9" Dec 16 12:15:43 crc kubenswrapper[4805]: E1216 12:15:43.978389 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 16 12:15:43 crc kubenswrapper[4805]: E1216 12:15:43.978637 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c9h6bhb5hc5h6h69h57dh9bh6bh59bh5c4h546h76h97h596h686h548hf6h69h555h594h8dhc4h5c7h5b8hffh5f5hb6h5f5h547hf7h57fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wjfqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6698d6cf45-z7twf_openstack(e75272e8-6166-46a2-a220-dc6e64301e77): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:15:43 crc kubenswrapper[4805]: E1216 12:15:43.980960 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6698d6cf45-z7twf" podUID="e75272e8-6166-46a2-a220-dc6e64301e77" Dec 16 12:15:44 crc kubenswrapper[4805]: I1216 12:15:44.297191 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d8fcdbb66-gmhjg" event={"ID":"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c","Type":"ContainerStarted","Data":"7dec06c079bc1453f05ec66edf5d8526484e7eae1482b2a299e45c815fd2aba2"} Dec 16 12:15:46 crc kubenswrapper[4805]: E1216 12:15:46.078708 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Dec 16 12:15:46 crc kubenswrapper[4805]: E1216 12:15:46.079461 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kbkbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-2th2m_openstack(ed0d4cec-a7ad-4887-8dc8-e7da47a7c878): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:15:46 crc kubenswrapper[4805]: E1216 12:15:46.080930 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-2th2m" podUID="ed0d4cec-a7ad-4887-8dc8-e7da47a7c878" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.157281 4805 scope.go:117] "RemoveContainer" containerID="406ab79cfcdcde8d447df52c3f2343f0ce1fa3fd9d6cddfc36a65eff44fc2a3f" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.309391 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.320282 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f4f4b6f8f-m9ldv" event={"ID":"4a288232-35f9-4e02-bc87-2c7af60a884b","Type":"ContainerDied","Data":"b5401c5d02f54043ddce81329c1514a38d114e051be393857383825112247212"} Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.320323 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5401c5d02f54043ddce81329c1514a38d114e051be393857383825112247212" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.320880 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-fernet-keys\") pod \"287ae472-06a8-4cc7-8508-1227342e727a\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.321009 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzvcs\" (UniqueName: \"kubernetes.io/projected/287ae472-06a8-4cc7-8508-1227342e727a-kube-api-access-lzvcs\") pod \"287ae472-06a8-4cc7-8508-1227342e727a\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.321085 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-combined-ca-bundle\") pod \"287ae472-06a8-4cc7-8508-1227342e727a\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.321161 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-credential-keys\") pod \"287ae472-06a8-4cc7-8508-1227342e727a\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.321284 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-config-data\") pod \"287ae472-06a8-4cc7-8508-1227342e727a\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.321323 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-scripts\") pod \"287ae472-06a8-4cc7-8508-1227342e727a\" (UID: \"287ae472-06a8-4cc7-8508-1227342e727a\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.330673 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/287ae472-06a8-4cc7-8508-1227342e727a-kube-api-access-lzvcs" (OuterVolumeSpecName: "kube-api-access-lzvcs") pod "287ae472-06a8-4cc7-8508-1227342e727a" (UID: "287ae472-06a8-4cc7-8508-1227342e727a"). InnerVolumeSpecName "kube-api-access-lzvcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.335457 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.340411 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-scripts" (OuterVolumeSpecName: "scripts") pod "287ae472-06a8-4cc7-8508-1227342e727a" (UID: "287ae472-06a8-4cc7-8508-1227342e727a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.340538 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6698d6cf45-z7twf" event={"ID":"e75272e8-6166-46a2-a220-dc6e64301e77","Type":"ContainerDied","Data":"702ab0a087f5f970ed7849735a2561135df4bc0a7a16f78e03f92eb52a6ec8cb"} Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.340573 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="702ab0a087f5f970ed7849735a2561135df4bc0a7a16f78e03f92eb52a6ec8cb" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.342215 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-654b85648c-wz8kq" event={"ID":"7d088a1e-1b68-444c-9155-016204abfaf9","Type":"ContainerDied","Data":"6741f9cf52b7f1dd7d5fbd9f5c534bfad36d12120d8ae138bca2196bae69198c"} Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.342291 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-654b85648c-wz8kq" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.344490 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "287ae472-06a8-4cc7-8508-1227342e727a" (UID: "287ae472-06a8-4cc7-8508-1227342e727a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.344631 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "287ae472-06a8-4cc7-8508-1227342e727a" (UID: "287ae472-06a8-4cc7-8508-1227342e727a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.345019 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0124c5d-80e6-4e65-804a-c5a437cda935","Type":"ContainerStarted","Data":"303c531983ad55fb81ab088848b7e7fa63edc9979e2a6509c4d4320948b2079d"} Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.347578 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zmd7t" event={"ID":"287ae472-06a8-4cc7-8508-1227342e727a","Type":"ContainerDied","Data":"7d33e23f65b74f476bfac2a9fd8b25d840ba79b5ea8bb5f9613c558419677918"} Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.347614 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d33e23f65b74f476bfac2a9fd8b25d840ba79b5ea8bb5f9613c558419677918" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.347673 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zmd7t" Dec 16 12:15:46 crc kubenswrapper[4805]: E1216 12:15:46.356789 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-2th2m" podUID="ed0d4cec-a7ad-4887-8dc8-e7da47a7c878" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.375967 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-config-data" (OuterVolumeSpecName: "config-data") pod "287ae472-06a8-4cc7-8508-1227342e727a" (UID: "287ae472-06a8-4cc7-8508-1227342e727a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.393568 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "287ae472-06a8-4cc7-8508-1227342e727a" (UID: "287ae472-06a8-4cc7-8508-1227342e727a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.414208 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.420446 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.426693 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.426722 4805 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.426739 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.426751 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.426761 4805 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/287ae472-06a8-4cc7-8508-1227342e727a-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.426771 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzvcs\" (UniqueName: \"kubernetes.io/projected/287ae472-06a8-4cc7-8508-1227342e727a-kube-api-access-lzvcs\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.529343 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4spx\" (UniqueName: 
\"kubernetes.io/projected/7d088a1e-1b68-444c-9155-016204abfaf9-kube-api-access-g4spx\") pod \"7d088a1e-1b68-444c-9155-016204abfaf9\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.529527 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e75272e8-6166-46a2-a220-dc6e64301e77-scripts\") pod \"e75272e8-6166-46a2-a220-dc6e64301e77\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.529625 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d088a1e-1b68-444c-9155-016204abfaf9-logs\") pod \"7d088a1e-1b68-444c-9155-016204abfaf9\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.529702 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjfqx\" (UniqueName: \"kubernetes.io/projected/e75272e8-6166-46a2-a220-dc6e64301e77-kube-api-access-wjfqx\") pod \"e75272e8-6166-46a2-a220-dc6e64301e77\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.529742 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a288232-35f9-4e02-bc87-2c7af60a884b-scripts\") pod \"4a288232-35f9-4e02-bc87-2c7af60a884b\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.529769 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d088a1e-1b68-444c-9155-016204abfaf9-scripts\") pod \"7d088a1e-1b68-444c-9155-016204abfaf9\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.529820 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e75272e8-6166-46a2-a220-dc6e64301e77-config-data\") pod \"e75272e8-6166-46a2-a220-dc6e64301e77\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.529914 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a288232-35f9-4e02-bc87-2c7af60a884b-logs\") pod \"4a288232-35f9-4e02-bc87-2c7af60a884b\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.530007 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d088a1e-1b68-444c-9155-016204abfaf9-config-data\") pod \"7d088a1e-1b68-444c-9155-016204abfaf9\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.530082 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vqrh\" (UniqueName: \"kubernetes.io/projected/4a288232-35f9-4e02-bc87-2c7af60a884b-kube-api-access-4vqrh\") pod \"4a288232-35f9-4e02-bc87-2c7af60a884b\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.530154 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e75272e8-6166-46a2-a220-dc6e64301e77-scripts" (OuterVolumeSpecName: "scripts") pod 
"e75272e8-6166-46a2-a220-dc6e64301e77" (UID: "e75272e8-6166-46a2-a220-dc6e64301e77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.530235 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e75272e8-6166-46a2-a220-dc6e64301e77-logs\") pod \"e75272e8-6166-46a2-a220-dc6e64301e77\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.530313 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d088a1e-1b68-444c-9155-016204abfaf9-horizon-secret-key\") pod \"7d088a1e-1b68-444c-9155-016204abfaf9\" (UID: \"7d088a1e-1b68-444c-9155-016204abfaf9\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.530358 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e75272e8-6166-46a2-a220-dc6e64301e77-horizon-secret-key\") pod \"e75272e8-6166-46a2-a220-dc6e64301e77\" (UID: \"e75272e8-6166-46a2-a220-dc6e64301e77\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.530411 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4a288232-35f9-4e02-bc87-2c7af60a884b-horizon-secret-key\") pod \"4a288232-35f9-4e02-bc87-2c7af60a884b\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.530451 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a288232-35f9-4e02-bc87-2c7af60a884b-config-data\") pod \"4a288232-35f9-4e02-bc87-2c7af60a884b\" (UID: \"4a288232-35f9-4e02-bc87-2c7af60a884b\") " Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.530432 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a288232-35f9-4e02-bc87-2c7af60a884b-scripts" (OuterVolumeSpecName: "scripts") pod "4a288232-35f9-4e02-bc87-2c7af60a884b" (UID: "4a288232-35f9-4e02-bc87-2c7af60a884b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.530561 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d088a1e-1b68-444c-9155-016204abfaf9-logs" (OuterVolumeSpecName: "logs") pod "7d088a1e-1b68-444c-9155-016204abfaf9" (UID: "7d088a1e-1b68-444c-9155-016204abfaf9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.530975 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d088a1e-1b68-444c-9155-016204abfaf9-scripts" (OuterVolumeSpecName: "scripts") pod "7d088a1e-1b68-444c-9155-016204abfaf9" (UID: "7d088a1e-1b68-444c-9155-016204abfaf9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.531074 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e75272e8-6166-46a2-a220-dc6e64301e77-config-data" (OuterVolumeSpecName: "config-data") pod "e75272e8-6166-46a2-a220-dc6e64301e77" (UID: "e75272e8-6166-46a2-a220-dc6e64301e77"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.531378 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d088a1e-1b68-444c-9155-016204abfaf9-logs\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.531407 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a288232-35f9-4e02-bc87-2c7af60a884b-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.532081 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d088a1e-1b68-444c-9155-016204abfaf9-config-data" (OuterVolumeSpecName: "config-data") pod "7d088a1e-1b68-444c-9155-016204abfaf9" (UID: "7d088a1e-1b68-444c-9155-016204abfaf9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.531424 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d088a1e-1b68-444c-9155-016204abfaf9-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.532526 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e75272e8-6166-46a2-a220-dc6e64301e77-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.532542 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e75272e8-6166-46a2-a220-dc6e64301e77-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.532916 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d088a1e-1b68-444c-9155-016204abfaf9-kube-api-access-g4spx" (OuterVolumeSpecName: "kube-api-access-g4spx") pod "7d088a1e-1b68-444c-9155-016204abfaf9" (UID: "7d088a1e-1b68-444c-9155-016204abfaf9"). InnerVolumeSpecName "kube-api-access-g4spx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.532975 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a288232-35f9-4e02-bc87-2c7af60a884b-logs" (OuterVolumeSpecName: "logs") pod "4a288232-35f9-4e02-bc87-2c7af60a884b" (UID: "4a288232-35f9-4e02-bc87-2c7af60a884b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.533736 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e75272e8-6166-46a2-a220-dc6e64301e77-logs" (OuterVolumeSpecName: "logs") pod "e75272e8-6166-46a2-a220-dc6e64301e77" (UID: "e75272e8-6166-46a2-a220-dc6e64301e77"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.534829 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e75272e8-6166-46a2-a220-dc6e64301e77-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e75272e8-6166-46a2-a220-dc6e64301e77" (UID: "e75272e8-6166-46a2-a220-dc6e64301e77"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.534913 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d088a1e-1b68-444c-9155-016204abfaf9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7d088a1e-1b68-444c-9155-016204abfaf9" (UID: "7d088a1e-1b68-444c-9155-016204abfaf9"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.535618 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e75272e8-6166-46a2-a220-dc6e64301e77-kube-api-access-wjfqx" (OuterVolumeSpecName: "kube-api-access-wjfqx") pod "e75272e8-6166-46a2-a220-dc6e64301e77" (UID: "e75272e8-6166-46a2-a220-dc6e64301e77"). InnerVolumeSpecName "kube-api-access-wjfqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.535805 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a288232-35f9-4e02-bc87-2c7af60a884b-config-data" (OuterVolumeSpecName: "config-data") pod "4a288232-35f9-4e02-bc87-2c7af60a884b" (UID: "4a288232-35f9-4e02-bc87-2c7af60a884b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.542217 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a288232-35f9-4e02-bc87-2c7af60a884b-kube-api-access-4vqrh" (OuterVolumeSpecName: "kube-api-access-4vqrh") pod "4a288232-35f9-4e02-bc87-2c7af60a884b" (UID: "4a288232-35f9-4e02-bc87-2c7af60a884b"). InnerVolumeSpecName "kube-api-access-4vqrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.543645 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a288232-35f9-4e02-bc87-2c7af60a884b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4a288232-35f9-4e02-bc87-2c7af60a884b" (UID: "4a288232-35f9-4e02-bc87-2c7af60a884b"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.634387 4805 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d088a1e-1b68-444c-9155-016204abfaf9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.634653 4805 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e75272e8-6166-46a2-a220-dc6e64301e77-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.634716 4805 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4a288232-35f9-4e02-bc87-2c7af60a884b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.634787 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a288232-35f9-4e02-bc87-2c7af60a884b-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.634847 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4spx\" (UniqueName: \"kubernetes.io/projected/7d088a1e-1b68-444c-9155-016204abfaf9-kube-api-access-g4spx\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.634921 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjfqx\" (UniqueName: \"kubernetes.io/projected/e75272e8-6166-46a2-a220-dc6e64301e77-kube-api-access-wjfqx\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.634976 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a288232-35f9-4e02-bc87-2c7af60a884b-logs\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.635031 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d088a1e-1b68-444c-9155-016204abfaf9-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.635088 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vqrh\" (UniqueName: \"kubernetes.io/projected/4a288232-35f9-4e02-bc87-2c7af60a884b-kube-api-access-4vqrh\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.635168 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e75272e8-6166-46a2-a220-dc6e64301e77-logs\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.710002 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-654b85648c-wz8kq"] Dec 16 12:15:46 crc kubenswrapper[4805]: I1216 12:15:46.717096 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-654b85648c-wz8kq"] Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.370457 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f4f4b6f8f-m9ldv" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.370590 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6698d6cf45-z7twf" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.447812 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zmd7t"] Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.468029 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zmd7t"] Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.476966 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6698d6cf45-z7twf"] Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.486092 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6698d6cf45-z7twf"] Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.522546 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f4f4b6f8f-m9ldv"] Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.534452 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f4f4b6f8f-m9ldv"] Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.545023 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kfrrc"] Dec 16 12:15:47 crc kubenswrapper[4805]: E1216 12:15:47.545609 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c623e3f-b44e-4d37-add1-c5626879007d" containerName="dnsmasq-dns" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.545632 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c623e3f-b44e-4d37-add1-c5626879007d" containerName="dnsmasq-dns" Dec 16 12:15:47 crc kubenswrapper[4805]: E1216 12:15:47.545650 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c623e3f-b44e-4d37-add1-c5626879007d" containerName="init" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.545659 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c623e3f-b44e-4d37-add1-c5626879007d" containerName="init" Dec 16 12:15:47 crc kubenswrapper[4805]: E1216 12:15:47.545670 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="287ae472-06a8-4cc7-8508-1227342e727a" containerName="keystone-bootstrap" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.545679 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="287ae472-06a8-4cc7-8508-1227342e727a" containerName="keystone-bootstrap" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.545941 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="287ae472-06a8-4cc7-8508-1227342e727a" containerName="keystone-bootstrap" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.545980 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c623e3f-b44e-4d37-add1-c5626879007d" containerName="dnsmasq-dns" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.549829 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.551982 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.553691 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-d6pz6" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.553949 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.553957 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.556385 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kfrrc"] Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.649784 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-config-data\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.649902 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-scripts\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.649954 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-credential-keys\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.650039 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sb5s\" (UniqueName: \"kubernetes.io/projected/51afacd6-c090-45db-aa5c-da8b53734401-kube-api-access-7sb5s\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.650305 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-fernet-keys\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.650366 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-combined-ca-bundle\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.751221 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-config-data\") pod \"keystone-bootstrap-kfrrc\" (UID: 
\"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.751297 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-scripts\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.751322 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-credential-keys\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.751373 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sb5s\" (UniqueName: \"kubernetes.io/projected/51afacd6-c090-45db-aa5c-da8b53734401-kube-api-access-7sb5s\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.751427 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-fernet-keys\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.751449 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-combined-ca-bundle\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.760286 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-scripts\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.762317 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-combined-ca-bundle\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.762423 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-fernet-keys\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.763870 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-config-data\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.765948 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-credential-keys\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.783684 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sb5s\" (UniqueName: \"kubernetes.io/projected/51afacd6-c090-45db-aa5c-da8b53734401-kube-api-access-7sb5s\") pod \"keystone-bootstrap-kfrrc\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:15:47 crc kubenswrapper[4805]: E1216 12:15:47.785209 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 16 12:15:47 crc kubenswrapper[4805]: E1216 12:15:47.785419 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2x8xx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-9v2nl_openstack(aa98eb47-6335-4544-b5d6-718b55075000): ErrImagePull: rpc error: code = 
Dec 16 12:15:47 crc kubenswrapper[4805]: E1216 12:15:47.786666 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-9v2nl" podUID="aa98eb47-6335-4544-b5d6-718b55075000"
Dec 16 12:15:47 crc kubenswrapper[4805]: I1216 12:15:47.867861 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kfrrc"
Dec 16 12:15:48 crc kubenswrapper[4805]: E1216 12:15:48.383224 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-9v2nl" podUID="aa98eb47-6335-4544-b5d6-718b55075000"
Dec 16 12:15:48 crc kubenswrapper[4805]: I1216 12:15:48.541330 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="287ae472-06a8-4cc7-8508-1227342e727a" path="/var/lib/kubelet/pods/287ae472-06a8-4cc7-8508-1227342e727a/volumes"
Dec 16 12:15:48 crc kubenswrapper[4805]: I1216 12:15:48.543575 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a288232-35f9-4e02-bc87-2c7af60a884b" path="/var/lib/kubelet/pods/4a288232-35f9-4e02-bc87-2c7af60a884b/volumes"
Dec 16 12:15:48 crc kubenswrapper[4805]: I1216 12:15:48.544063 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d088a1e-1b68-444c-9155-016204abfaf9" path="/var/lib/kubelet/pods/7d088a1e-1b68-444c-9155-016204abfaf9/volumes"
Dec 16 12:15:48 crc kubenswrapper[4805]: I1216 12:15:48.545916 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e75272e8-6166-46a2-a220-dc6e64301e77" path="/var/lib/kubelet/pods/e75272e8-6166-46a2-a220-dc6e64301e77/volumes"
Dec 16 12:15:49 crc kubenswrapper[4805]: I1216 12:15:49.867944 4805 scope.go:117] "RemoveContainer" containerID="d3092c2527ab0dcd7a9a9a61613b2265defab594c8cd9fbda9395f115e5c0fc6"
Dec 16 12:15:49 crc kubenswrapper[4805]: E1216 12:15:49.873872 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified"
Dec 16 12:15:49 crc kubenswrapper[4805]: E1216 12:15:49.874111 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59ch696h679hf8h64ch8h5bbh67fh58h5dh5f5h5c9h78h68fh594h548h576h689h54fh5c4h554hb7h676h597h545h9dh65ch77h5d4h5b7h675hb4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pscpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a26a23d3-144a-4e2d-8ce2-63a63da575ff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 16 12:15:50 crc kubenswrapper[4805]: I1216 12:15:50.478823 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"52cf8f6f2f746633bfd1f446a1bda7ea7f3c784cb6d567e07a1cc669ed487201"}
Dec 16 12:15:50 crc kubenswrapper[4805]: I1216 12:15:50.488067 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" event={"ID":"aaba3492-18c7-49bd-90d6-7429d516cb3b","Type":"ContainerStarted","Data":"fed3d206b56b67c98b096c52a182eb1def5dd61af473ba040f1ad1ee5e20d224"}
Dec 16 12:15:50 crc kubenswrapper[4805]: I1216 12:15:50.624975 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kfrrc"]
Dec 16 12:15:50 crc kubenswrapper[4805]: W1216 12:15:50.640555 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51afacd6_c090_45db_aa5c_da8b53734401.slice/crio-cb2a7e2acaf5671c0440b128cc5068e5b2608788aeef2d062844da271669a265 WatchSource:0}: Error finding container cb2a7e2acaf5671c0440b128cc5068e5b2608788aeef2d062844da271669a265: Status 404 returned error can't find the container with id cb2a7e2acaf5671c0440b128cc5068e5b2608788aeef2d062844da271669a265
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.498523 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84","Type":"ContainerStarted","Data":"331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f"}
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.499732 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0124c5d-80e6-4e65-804a-c5a437cda935","Type":"ContainerStarted","Data":"ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890"}
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.501222 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l88fx" event={"ID":"bedb416e-1423-4c43-8676-f6843c51c7b0","Type":"ContainerStarted","Data":"c098b155aef353dd865ed392e4d7db83c6b4cf9da55f74658a418b5c260f9c3c"}
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.502804 4805 generic.go:334] "Generic (PLEG): container finished" podID="aaba3492-18c7-49bd-90d6-7429d516cb3b" containerID="fed3d206b56b67c98b096c52a182eb1def5dd61af473ba040f1ad1ee5e20d224" exitCode=0
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.502862 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" event={"ID":"aaba3492-18c7-49bd-90d6-7429d516cb3b","Type":"ContainerDied","Data":"fed3d206b56b67c98b096c52a182eb1def5dd61af473ba040f1ad1ee5e20d224"}
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.502980 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" event={"ID":"aaba3492-18c7-49bd-90d6-7429d516cb3b","Type":"ContainerStarted","Data":"ff9e1892082140c7071f1206934c7e81034c5b199c02d522a48a93d561f0d700"}
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.503374 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf"
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.505414 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69799999fb-rbm4h" event={"ID":"d4af4b9e-77b2-4f27-8148-7000d60f2266","Type":"ContainerStarted","Data":"dd2281cf1874fb43ac82e70dd500cb318e02e272b5ea53465e648ae1f04ec142"}
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.505491 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69799999fb-rbm4h" event={"ID":"d4af4b9e-77b2-4f27-8148-7000d60f2266","Type":"ContainerStarted","Data":"0bf927b5508d4245eeeeea4173aef728309b3eea1085dfbfc6be4c3f36ad301d"}
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.507783 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d8fcdbb66-gmhjg" event={"ID":"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c","Type":"ContainerStarted","Data":"0d12cc96b864154bb1e684e6dd7cc9990d58ab7a48b62303008832bebe1d0a2b"}
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.507819 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d8fcdbb66-gmhjg" event={"ID":"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c","Type":"ContainerStarted","Data":"985f88c1f9a128dbb9319d7a54a3e3a7b4e72ec8f5b8e705eb481b82a7cd4102"}
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.510194 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kfrrc" event={"ID":"51afacd6-c090-45db-aa5c-da8b53734401","Type":"ContainerStarted","Data":"350c78c6c63507f88127a2e1d845e0aac634a99db3a3be94e1afe28884c84c59"}
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.510243 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kfrrc" event={"ID":"51afacd6-c090-45db-aa5c-da8b53734401","Type":"ContainerStarted","Data":"cb2a7e2acaf5671c0440b128cc5068e5b2608788aeef2d062844da271669a265"}
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.519033 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-l88fx" podStartSLOduration=3.285723255 podStartE2EDuration="43.519010302s" podCreationTimestamp="2025-12-16 12:15:08 +0000 UTC" firstStartedPulling="2025-12-16 12:15:09.915505259 +0000 UTC m=+1183.633763064" lastFinishedPulling="2025-12-16 12:15:50.148792306 +0000 UTC m=+1223.867050111" observedRunningTime="2025-12-16 12:15:51.518629611 +0000 UTC m=+1225.236887426" watchObservedRunningTime="2025-12-16 12:15:51.519010302 +0000 UTC m=+1225.237268127"
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.543120 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kfrrc" podStartSLOduration=4.543101003 podStartE2EDuration="4.543101003s" podCreationTimestamp="2025-12-16 12:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:51.536125383 +0000 UTC m=+1225.254383208" watchObservedRunningTime="2025-12-16 12:15:51.543101003 +0000 UTC m=+1225.261358828"
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.557211 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6d8fcdbb66-gmhjg" podStartSLOduration=29.417497378 podStartE2EDuration="35.557186557s" podCreationTimestamp="2025-12-16 12:15:16 +0000 UTC" firstStartedPulling="2025-12-16 12:15:43.962940323 +0000 UTC m=+1217.681198138" lastFinishedPulling="2025-12-16 12:15:50.102629512 +0000 UTC m=+1223.820887317" observedRunningTime="2025-12-16 12:15:51.554193621 +0000 UTC m=+1225.272451446" watchObservedRunningTime="2025-12-16 12:15:51.557186557 +0000 UTC m=+1225.275444372"
Dec 16 12:15:51 crc kubenswrapper[4805]: I1216 12:15:51.584954 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" podStartSLOduration=34.584930063 podStartE2EDuration="34.584930063s" podCreationTimestamp="2025-12-16 12:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:51.58238703 +0000 UTC m=+1225.300644835" watchObservedRunningTime="2025-12-16 12:15:51.584930063 +0000 UTC m=+1225.303187888"
Dec 16 12:15:52 crc kubenswrapper[4805]: I1216 12:15:52.519956 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a26a23d3-144a-4e2d-8ce2-63a63da575ff","Type":"ContainerStarted","Data":"5a4226d7d5c10ea6eabbcc1f9aeff4c24f8c6544c52123b8d1077e134a6f849a"}
Dec 16 12:15:52 crc kubenswrapper[4805]: I1216 12:15:52.532557 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" containerName="glance-log" containerID="cri-o://331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f" gracePeriod=30
Dec 16 12:15:52 crc kubenswrapper[4805]: I1216 12:15:52.532795 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" containerName="glance-httpd" containerID="cri-o://98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e" gracePeriod=30
Dec 16 12:15:52 crc kubenswrapper[4805]: I1216 12:15:52.534768 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84","Type":"ContainerStarted","Data":"98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e"}
Dec 16 12:15:52 crc kubenswrapper[4805]: I1216 12:15:52.540683 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0124c5d-80e6-4e65-804a-c5a437cda935","Type":"ContainerStarted","Data":"dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2"}
Dec 16 12:15:52 crc kubenswrapper[4805]: I1216 12:15:52.540925 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c0124c5d-80e6-4e65-804a-c5a437cda935" containerName="glance-log" containerID="cri-o://ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890" gracePeriod=30
Dec 16 12:15:52 crc kubenswrapper[4805]: I1216 12:15:52.540973 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c0124c5d-80e6-4e65-804a-c5a437cda935" containerName="glance-httpd" containerID="cri-o://dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2" gracePeriod=30
Dec 16 12:15:52 crc kubenswrapper[4805]: I1216 12:15:52.578737 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=35.57872156 podStartE2EDuration="35.57872156s" podCreationTimestamp="2025-12-16 12:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:52.575667713 +0000 UTC m=+1226.293925528" watchObservedRunningTime="2025-12-16 12:15:52.57872156 +0000 UTC m=+1226.296979375"
Dec 16 12:15:52 crc kubenswrapper[4805]: I1216 12:15:52.598211 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69799999fb-rbm4h" podStartSLOduration=22.57197902 podStartE2EDuration="36.598195668s" podCreationTimestamp="2025-12-16 12:15:16 +0000 UTC" firstStartedPulling="2025-12-16 12:15:36.122749033 +0000 UTC m=+1209.841006828" lastFinishedPulling="2025-12-16 12:15:50.148965671 +0000 UTC m=+1223.867223476" observedRunningTime="2025-12-16 12:15:52.595075479 +0000 UTC m=+1226.313333274" watchObservedRunningTime="2025-12-16 12:15:52.598195668 +0000 UTC m=+1226.316453483"
Dec 16 12:15:52 crc kubenswrapper[4805]: I1216 12:15:52.638665 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=35.638646818 podStartE2EDuration="35.638646818s" podCreationTimestamp="2025-12-16 12:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:15:52.621806295 +0000 UTC m=+1226.340064100" watchObservedRunningTime="2025-12-16 12:15:52.638646818 +0000 UTC m=+1226.356904633"
Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.296768 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.429968 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.509372 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0124c5d-80e6-4e65-804a-c5a437cda935-logs\") pod \"c0124c5d-80e6-4e65-804a-c5a437cda935\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") "
Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.509463 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfcdc\" (UniqueName: \"kubernetes.io/projected/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-kube-api-access-mfcdc\") pod \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") "
Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.509505 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-combined-ca-bundle\") pod \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") "
Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.509545 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") "
Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.509597 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-scripts\") pod \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") "
Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.509623 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-config-data\") pod \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") "
Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.509667 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0124c5d-80e6-4e65-804a-c5a437cda935-httpd-run\") pod \"c0124c5d-80e6-4e65-804a-c5a437cda935\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") "
Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.509749 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfp6r\" (UniqueName: \"kubernetes.io/projected/c0124c5d-80e6-4e65-804a-c5a437cda935-kube-api-access-tfp6r\") pod \"c0124c5d-80e6-4e65-804a-c5a437cda935\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") "
Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.509779 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-httpd-run\") pod \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") "
Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.509845 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"c0124c5d-80e6-4e65-804a-c5a437cda935\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") "
for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"c0124c5d-80e6-4e65-804a-c5a437cda935\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.509895 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-combined-ca-bundle\") pod \"c0124c5d-80e6-4e65-804a-c5a437cda935\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.509948 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-config-data\") pod \"c0124c5d-80e6-4e65-804a-c5a437cda935\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.509984 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-scripts\") pod \"c0124c5d-80e6-4e65-804a-c5a437cda935\" (UID: \"c0124c5d-80e6-4e65-804a-c5a437cda935\") " Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.509987 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0124c5d-80e6-4e65-804a-c5a437cda935-logs" (OuterVolumeSpecName: "logs") pod "c0124c5d-80e6-4e65-804a-c5a437cda935" (UID: "c0124c5d-80e6-4e65-804a-c5a437cda935"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.510009 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-logs\") pod \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\" (UID: \"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84\") " Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.510494 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0124c5d-80e6-4e65-804a-c5a437cda935-logs\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.510707 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-logs" (OuterVolumeSpecName: "logs") pod "50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" (UID: "50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.518611 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" (UID: "50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.529464 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-scripts" (OuterVolumeSpecName: "scripts") pod "50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" (UID: "50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.529759 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-kube-api-access-mfcdc" (OuterVolumeSpecName: "kube-api-access-mfcdc") pod "50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" (UID: "50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84"). InnerVolumeSpecName "kube-api-access-mfcdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.530231 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0124c5d-80e6-4e65-804a-c5a437cda935-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c0124c5d-80e6-4e65-804a-c5a437cda935" (UID: "c0124c5d-80e6-4e65-804a-c5a437cda935"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.530603 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "c0124c5d-80e6-4e65-804a-c5a437cda935" (UID: "c0124c5d-80e6-4e65-804a-c5a437cda935"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.531618 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-scripts" (OuterVolumeSpecName: "scripts") pod "c0124c5d-80e6-4e65-804a-c5a437cda935" (UID: "c0124c5d-80e6-4e65-804a-c5a437cda935"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.533827 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" (UID: "50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.535692 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0124c5d-80e6-4e65-804a-c5a437cda935-kube-api-access-tfp6r" (OuterVolumeSpecName: "kube-api-access-tfp6r") pod "c0124c5d-80e6-4e65-804a-c5a437cda935" (UID: "c0124c5d-80e6-4e65-804a-c5a437cda935"). InnerVolumeSpecName "kube-api-access-tfp6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.563369 4805 generic.go:334] "Generic (PLEG): container finished" podID="50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" containerID="98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e" exitCode=0 Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.563439 4805 generic.go:334] "Generic (PLEG): container finished" podID="50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" containerID="331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f" exitCode=143 Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.563493 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84","Type":"ContainerDied","Data":"98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e"} Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.563528 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84","Type":"ContainerDied","Data":"331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f"} Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.563541 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84","Type":"ContainerDied","Data":"7693463b0644dcd31cb2b4e28e532daba82b83faf494fba99a8ab276fe980993"} Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.563560 4805 scope.go:117] "RemoveContainer" containerID="98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.563831 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.565198 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0124c5d-80e6-4e65-804a-c5a437cda935" (UID: "c0124c5d-80e6-4e65-804a-c5a437cda935"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.568300 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" (UID: "50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.573992 4805 generic.go:334] "Generic (PLEG): container finished" podID="c0124c5d-80e6-4e65-804a-c5a437cda935" containerID="dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2" exitCode=0 Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.574026 4805 generic.go:334] "Generic (PLEG): container finished" podID="c0124c5d-80e6-4e65-804a-c5a437cda935" containerID="ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890" exitCode=143 Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.574164 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0124c5d-80e6-4e65-804a-c5a437cda935","Type":"ContainerDied","Data":"dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2"} Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.574203 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.574207 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0124c5d-80e6-4e65-804a-c5a437cda935","Type":"ContainerDied","Data":"ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890"} Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.574327 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0124c5d-80e6-4e65-804a-c5a437cda935","Type":"ContainerDied","Data":"303c531983ad55fb81ab088848b7e7fa63edc9979e2a6509c4d4320948b2079d"} Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.595287 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-config-data" (OuterVolumeSpecName: "config-data") pod "50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" (UID: "50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.598556 4805 scope.go:117] "RemoveContainer" containerID="331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.612462 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.612501 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.612512 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.612520 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-logs\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.612529 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfcdc\" (UniqueName: \"kubernetes.io/projected/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-kube-api-access-mfcdc\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.612539 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.612556 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.612564 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.612601 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.612621 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0124c5d-80e6-4e65-804a-c5a437cda935-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.612632 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.612641 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfp6r\" (UniqueName: \"kubernetes.io/projected/c0124c5d-80e6-4e65-804a-c5a437cda935-kube-api-access-tfp6r\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.624225 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-config-data" (OuterVolumeSpecName: "config-data") pod "c0124c5d-80e6-4e65-804a-c5a437cda935" (UID: "c0124c5d-80e6-4e65-804a-c5a437cda935"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.636377 4805 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.637641 4805 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.637656 4805 scope.go:117] "RemoveContainer" containerID="98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e" Dec 16 12:15:53 crc kubenswrapper[4805]: E1216 12:15:53.645732 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e\": container with ID starting with 98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e not found: ID does not exist" containerID="98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.645798 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e"} err="failed to get container status \"98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e\": rpc error: code = NotFound desc = could not find container \"98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e\": container with ID starting with 98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e not found: ID does not exist" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.645831 4805 scope.go:117] "RemoveContainer" containerID="331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f" Dec 16 12:15:53 crc kubenswrapper[4805]: E1216 12:15:53.647051 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f\": container with ID starting with 331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f not found: ID does not exist" containerID="331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.647100 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f"} err="failed to get container status \"331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f\": rpc error: code = NotFound desc = could not find container \"331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f\": container with ID starting with 331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f not found: ID does not exist" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.647129 4805 scope.go:117] "RemoveContainer" containerID="98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.647509 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e"} err="failed to get container status \"98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e\": rpc error: code = NotFound desc = could not find container \"98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e\": container with ID starting with 98443db806b4ad8848ec439d2e22970f5fd6b3126dcf607e6b3a7c8b7c7a586e not found: ID does not exist" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.647540 4805 scope.go:117] "RemoveContainer" containerID="331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.647915 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f"} err="failed to get container status \"331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f\": rpc error: code = NotFound desc = could not find container \"331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f\": container with ID starting with 331171c6329f3fe3fa4a645f37d3755c1799253e1a80a9be0a13defc2d7cda8f not found: ID does not exist" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.647950 4805 scope.go:117] "RemoveContainer" containerID="dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.680723 4805 scope.go:117] "RemoveContainer" containerID="ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.716090 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0124c5d-80e6-4e65-804a-c5a437cda935-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.716156 4805 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.716169 4805 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.717995 4805 scope.go:117] "RemoveContainer" containerID="dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2" Dec 16 12:15:53 crc kubenswrapper[4805]: E1216 12:15:53.719868 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2\": container with ID starting with dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2 not found: ID does not exist" containerID="dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.719973 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2"} err="failed to get container status \"dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2\": rpc error: code = NotFound desc = could not find container \"dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2\": container with ID starting with dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2 not 
found: ID does not exist" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.720039 4805 scope.go:117] "RemoveContainer" containerID="ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890" Dec 16 12:15:53 crc kubenswrapper[4805]: E1216 12:15:53.726001 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890\": container with ID starting with ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890 not found: ID does not exist" containerID="ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.726076 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890"} err="failed to get container status \"ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890\": rpc error: code = NotFound desc = could not find container \"ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890\": container with ID starting with ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890 not found: ID does not exist" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.726184 4805 scope.go:117] "RemoveContainer" containerID="dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.726964 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2"} err="failed to get container status \"dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2\": rpc error: code = NotFound desc = could not find container \"dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2\": container with ID starting with dfdec954926b1681ceb5081ac9a70ea563dc7e2b7f8d4a396218b0bcabee49b2 not found: ID does not exist" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.727037 4805 scope.go:117] "RemoveContainer" containerID="ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.727423 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890"} err="failed to get container status \"ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890\": rpc error: code = NotFound desc = could not find container \"ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890\": container with ID starting with ae8759b16d74876562dfa91d1c790221be7333fc8fee9a3818665eaf6efc8890 not found: ID does not exist" Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.957911 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.975025 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 16 12:15:53 crc kubenswrapper[4805]: I1216 12:15:53.989761 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.015124 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.029340 4805 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 16 12:15:54 crc kubenswrapper[4805]: E1216 12:15:54.029868 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" containerName="glance-log" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.029893 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" containerName="glance-log" Dec 16 12:15:54 crc kubenswrapper[4805]: E1216 12:15:54.029903 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" containerName="glance-httpd" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.029911 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" containerName="glance-httpd" Dec 16 12:15:54 crc kubenswrapper[4805]: E1216 12:15:54.029938 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0124c5d-80e6-4e65-804a-c5a437cda935" containerName="glance-log" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.029946 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0124c5d-80e6-4e65-804a-c5a437cda935" containerName="glance-log" Dec 16 12:15:54 crc kubenswrapper[4805]: E1216 12:15:54.029961 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0124c5d-80e6-4e65-804a-c5a437cda935" containerName="glance-httpd" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.029968 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0124c5d-80e6-4e65-804a-c5a437cda935" containerName="glance-httpd" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.030242 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0124c5d-80e6-4e65-804a-c5a437cda935" containerName="glance-httpd" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.030261 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" containerName="glance-log" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.030278 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0124c5d-80e6-4e65-804a-c5a437cda935" containerName="glance-log" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.030287 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" containerName="glance-httpd" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.031446 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.036373 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-r9rpt" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.036853 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.037030 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.037238 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.043409 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.086941 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.088712 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.098016 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.099748 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.105492 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.173753 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.173930 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.174048 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0bdd7ef-5424-4628-b866-0aa42d44f257-logs\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.174318 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.174422 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-config-data\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.174537 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbpr7\" (UniqueName: \"kubernetes.io/projected/a0bdd7ef-5424-4628-b866-0aa42d44f257-kube-api-access-vbpr7\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.174579 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0bdd7ef-5424-4628-b866-0aa42d44f257-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.174648 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-scripts\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.276472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.276588 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.276633 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.276683 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0bdd7ef-5424-4628-b866-0aa42d44f257-logs\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.276710 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.276742 4805 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.276751 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.276766 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.277309 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.277355 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-config-data\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.277417 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6w9f\" (UniqueName: \"kubernetes.io/projected/41274a07-eb55-48ed-9ebd-35edbc26f4f4-kube-api-access-r6w9f\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.277442 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.277507 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbpr7\" (UniqueName: \"kubernetes.io/projected/a0bdd7ef-5424-4628-b866-0aa42d44f257-kube-api-access-vbpr7\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.277557 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0bdd7ef-5424-4628-b866-0aa42d44f257-logs\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.277551 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41274a07-eb55-48ed-9ebd-35edbc26f4f4-logs\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.277617 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41274a07-eb55-48ed-9ebd-35edbc26f4f4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.277646 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0bdd7ef-5424-4628-b866-0aa42d44f257-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.277718 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-scripts\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.278483 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0bdd7ef-5424-4628-b866-0aa42d44f257-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.281878 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.283262 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.286827 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-config-data\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.306315 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-scripts\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.306444 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbpr7\" (UniqueName: 
\"kubernetes.io/projected/a0bdd7ef-5424-4628-b866-0aa42d44f257-kube-api-access-vbpr7\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.320239 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") " pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.379857 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.379958 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.380021 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.380054 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.380115 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6w9f\" (UniqueName: \"kubernetes.io/projected/41274a07-eb55-48ed-9ebd-35edbc26f4f4-kube-api-access-r6w9f\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.380157 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.380189 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41274a07-eb55-48ed-9ebd-35edbc26f4f4-logs\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.380210 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41274a07-eb55-48ed-9ebd-35edbc26f4f4-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.380897 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41274a07-eb55-48ed-9ebd-35edbc26f4f4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.381499 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.385385 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.391946 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.391957 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.392033 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.405553 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41274a07-eb55-48ed-9ebd-35edbc26f4f4-logs\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.411752 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6w9f\" (UniqueName: \"kubernetes.io/projected/41274a07-eb55-48ed-9ebd-35edbc26f4f4-kube-api-access-r6w9f\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.424573 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.481193 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.546463 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84" path="/var/lib/kubelet/pods/50ecde6a-1b42-4c96-a34e-b7ca2d4d7c84/volumes" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.549203 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0124c5d-80e6-4e65-804a-c5a437cda935" path="/var/lib/kubelet/pods/c0124c5d-80e6-4e65-804a-c5a437cda935/volumes" Dec 16 12:15:54 crc kubenswrapper[4805]: I1216 12:15:54.743955 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 16 12:15:55 crc kubenswrapper[4805]: I1216 12:15:55.583636 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 16 12:15:57 crc kubenswrapper[4805]: I1216 12:15:57.060262 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:57 crc kubenswrapper[4805]: I1216 12:15:57.060584 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:15:57 crc kubenswrapper[4805]: I1216 12:15:57.167741 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:57 crc kubenswrapper[4805]: I1216 12:15:57.168005 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:15:58 crc kubenswrapper[4805]: I1216 12:15:58.048611 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:15:58 crc kubenswrapper[4805]: I1216 12:15:58.124733 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-zbc97"] Dec 16 12:15:58 crc kubenswrapper[4805]: I1216 12:15:58.125021 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" podUID="a28faa94-62ae-49fa-8f01-8dff4989044e" containerName="dnsmasq-dns" containerID="cri-o://4d198bf75617b597aedd40c47c2c65654c4d7130efa0a286b52ecdf95bba44d6" gracePeriod=10 Dec 16 12:15:59 crc kubenswrapper[4805]: I1216 12:15:59.636108 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a0bdd7ef-5424-4628-b866-0aa42d44f257","Type":"ContainerStarted","Data":"77dd06c4891fa1b9c374c8b1db0206e46f808de9c32691bd27ee06907543f093"} Dec 16 12:15:59 crc kubenswrapper[4805]: I1216 12:15:59.639644 4805 generic.go:334] "Generic (PLEG): container finished" podID="a28faa94-62ae-49fa-8f01-8dff4989044e" containerID="4d198bf75617b597aedd40c47c2c65654c4d7130efa0a286b52ecdf95bba44d6" exitCode=0 Dec 16 12:15:59 crc kubenswrapper[4805]: I1216 12:15:59.639695 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" 
event={"ID":"a28faa94-62ae-49fa-8f01-8dff4989044e","Type":"ContainerDied","Data":"4d198bf75617b597aedd40c47c2c65654c4d7130efa0a286b52ecdf95bba44d6"} Dec 16 12:16:01 crc kubenswrapper[4805]: I1216 12:16:01.868944 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" podUID="a28faa94-62ae-49fa-8f01-8dff4989044e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Dec 16 12:16:02 crc kubenswrapper[4805]: I1216 12:16:02.671524 4805 generic.go:334] "Generic (PLEG): container finished" podID="51afacd6-c090-45db-aa5c-da8b53734401" containerID="350c78c6c63507f88127a2e1d845e0aac634a99db3a3be94e1afe28884c84c59" exitCode=0 Dec 16 12:16:02 crc kubenswrapper[4805]: I1216 12:16:02.671581 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kfrrc" event={"ID":"51afacd6-c090-45db-aa5c-da8b53734401","Type":"ContainerDied","Data":"350c78c6c63507f88127a2e1d845e0aac634a99db3a3be94e1afe28884c84c59"} Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.217411 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.369450 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-ovsdbserver-nb\") pod \"a28faa94-62ae-49fa-8f01-8dff4989044e\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.369547 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpp46\" (UniqueName: \"kubernetes.io/projected/a28faa94-62ae-49fa-8f01-8dff4989044e-kube-api-access-kpp46\") pod \"a28faa94-62ae-49fa-8f01-8dff4989044e\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.369635 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-dns-svc\") pod \"a28faa94-62ae-49fa-8f01-8dff4989044e\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.369676 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-config\") pod \"a28faa94-62ae-49fa-8f01-8dff4989044e\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.369736 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-ovsdbserver-sb\") pod \"a28faa94-62ae-49fa-8f01-8dff4989044e\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.369765 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-dns-swift-storage-0\") pod \"a28faa94-62ae-49fa-8f01-8dff4989044e\" (UID: \"a28faa94-62ae-49fa-8f01-8dff4989044e\") " Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.391282 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a28faa94-62ae-49fa-8f01-8dff4989044e-kube-api-access-kpp46" (OuterVolumeSpecName: "kube-api-access-kpp46") pod "a28faa94-62ae-49fa-8f01-8dff4989044e" (UID: "a28faa94-62ae-49fa-8f01-8dff4989044e"). InnerVolumeSpecName "kube-api-access-kpp46". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.425168 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a28faa94-62ae-49fa-8f01-8dff4989044e" (UID: "a28faa94-62ae-49fa-8f01-8dff4989044e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.433270 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a28faa94-62ae-49fa-8f01-8dff4989044e" (UID: "a28faa94-62ae-49fa-8f01-8dff4989044e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.446854 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a28faa94-62ae-49fa-8f01-8dff4989044e" (UID: "a28faa94-62ae-49fa-8f01-8dff4989044e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.467701 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a28faa94-62ae-49fa-8f01-8dff4989044e" (UID: "a28faa94-62ae-49fa-8f01-8dff4989044e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.467862 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-config" (OuterVolumeSpecName: "config") pod "a28faa94-62ae-49fa-8f01-8dff4989044e" (UID: "a28faa94-62ae-49fa-8f01-8dff4989044e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.474229 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.474251 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.474264 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.474275 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.474284 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a28faa94-62ae-49fa-8f01-8dff4989044e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.474291 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpp46\" (UniqueName: \"kubernetes.io/projected/a28faa94-62ae-49fa-8f01-8dff4989044e-kube-api-access-kpp46\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.592208 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.687395 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" event={"ID":"a28faa94-62ae-49fa-8f01-8dff4989044e","Type":"ContainerDied","Data":"2f16371396ee923d681cc6df9802c47effb61ec9f24d9c724b679eac2e967e06"} Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.687654 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-zbc97" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.687682 4805 scope.go:117] "RemoveContainer" containerID="4d198bf75617b597aedd40c47c2c65654c4d7130efa0a286b52ecdf95bba44d6" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.690782 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"41274a07-eb55-48ed-9ebd-35edbc26f4f4","Type":"ContainerStarted","Data":"467e7b21fe9aeac095d16e0da0c975251142a88d4b7effbb05b3fd402a21c45f"} Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.725911 4805 scope.go:117] "RemoveContainer" containerID="e7fdf8ca296f45dffdc22499e1d25e253d30eb502ca17bedc3e17d3b7f8a1dfa" Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.743358 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-zbc97"] Dec 16 12:16:03 crc kubenswrapper[4805]: I1216 12:16:03.756067 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-zbc97"] Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.456122 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.585407 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a28faa94-62ae-49fa-8f01-8dff4989044e" path="/var/lib/kubelet/pods/a28faa94-62ae-49fa-8f01-8dff4989044e/volumes" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.596114 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-fernet-keys\") pod \"51afacd6-c090-45db-aa5c-da8b53734401\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.596492 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sb5s\" (UniqueName: \"kubernetes.io/projected/51afacd6-c090-45db-aa5c-da8b53734401-kube-api-access-7sb5s\") pod \"51afacd6-c090-45db-aa5c-da8b53734401\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.596648 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-combined-ca-bundle\") pod \"51afacd6-c090-45db-aa5c-da8b53734401\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.596747 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-scripts\") pod \"51afacd6-c090-45db-aa5c-da8b53734401\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.596899 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-credential-keys\") pod \"51afacd6-c090-45db-aa5c-da8b53734401\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.597160 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-config-data\") pod \"51afacd6-c090-45db-aa5c-da8b53734401\" (UID: \"51afacd6-c090-45db-aa5c-da8b53734401\") " Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.603496 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "51afacd6-c090-45db-aa5c-da8b53734401" (UID: "51afacd6-c090-45db-aa5c-da8b53734401"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.604379 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51afacd6-c090-45db-aa5c-da8b53734401-kube-api-access-7sb5s" (OuterVolumeSpecName: "kube-api-access-7sb5s") pod "51afacd6-c090-45db-aa5c-da8b53734401" (UID: "51afacd6-c090-45db-aa5c-da8b53734401"). InnerVolumeSpecName "kube-api-access-7sb5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.604497 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-scripts" (OuterVolumeSpecName: "scripts") pod "51afacd6-c090-45db-aa5c-da8b53734401" (UID: "51afacd6-c090-45db-aa5c-da8b53734401"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.608491 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "51afacd6-c090-45db-aa5c-da8b53734401" (UID: "51afacd6-c090-45db-aa5c-da8b53734401"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.632334 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-config-data" (OuterVolumeSpecName: "config-data") pod "51afacd6-c090-45db-aa5c-da8b53734401" (UID: "51afacd6-c090-45db-aa5c-da8b53734401"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.643761 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51afacd6-c090-45db-aa5c-da8b53734401" (UID: "51afacd6-c090-45db-aa5c-da8b53734401"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.698639 4805 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.698677 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sb5s\" (UniqueName: \"kubernetes.io/projected/51afacd6-c090-45db-aa5c-da8b53734401-kube-api-access-7sb5s\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.698689 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.698697 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.698705 4805 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.698714 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51afacd6-c090-45db-aa5c-da8b53734401-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.760081 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a26a23d3-144a-4e2d-8ce2-63a63da575ff","Type":"ContainerStarted","Data":"7b6c185bf76cea5cf507cad0bd80bb4004ac11a3da0d298a3b60e93d0c6136d5"} Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.767702 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2th2m" event={"ID":"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878","Type":"ContainerStarted","Data":"62eb1b9103f333ef9de12ee3767db5b2504d85daf1b363217f7912283f710a33"} Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.777184 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kfrrc" event={"ID":"51afacd6-c090-45db-aa5c-da8b53734401","Type":"ContainerDied","Data":"cb2a7e2acaf5671c0440b128cc5068e5b2608788aeef2d062844da271669a265"} Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.777228 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb2a7e2acaf5671c0440b128cc5068e5b2608788aeef2d062844da271669a265" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.777300 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kfrrc" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.794019 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a0bdd7ef-5424-4628-b866-0aa42d44f257","Type":"ContainerStarted","Data":"bde0dfdb973e64b63035ae69186cf5b44e448ad2da47174ca17f5d68bd341678"} Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.806930 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"41274a07-eb55-48ed-9ebd-35edbc26f4f4","Type":"ContainerStarted","Data":"2441b4e70d165b40dc8fdc3e38db9226fe254d60d39f69bc54ffab09b95d41ec"} Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.805321 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2th2m" podStartSLOduration=3.4602055910000002 podStartE2EDuration="56.80529826s" podCreationTimestamp="2025-12-16 12:15:08 +0000 UTC" firstStartedPulling="2025-12-16 12:15:10.152369193 +0000 UTC m=+1183.870626998" lastFinishedPulling="2025-12-16 12:16:03.497461862 +0000 UTC m=+1237.215719667" observedRunningTime="2025-12-16 12:16:04.786361657 +0000 UTC m=+1238.504619462" watchObservedRunningTime="2025-12-16 12:16:04.80529826 +0000 UTC m=+1238.523556075" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.874205 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7ff49bdd98-ng49z"] Dec 16 12:16:04 crc kubenswrapper[4805]: E1216 12:16:04.874709 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a28faa94-62ae-49fa-8f01-8dff4989044e" containerName="dnsmasq-dns" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.874731 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a28faa94-62ae-49fa-8f01-8dff4989044e" containerName="dnsmasq-dns" Dec 16 12:16:04 crc kubenswrapper[4805]: E1216 12:16:04.874752 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a28faa94-62ae-49fa-8f01-8dff4989044e" containerName="init" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.874758 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a28faa94-62ae-49fa-8f01-8dff4989044e" containerName="init" Dec 16 12:16:04 crc kubenswrapper[4805]: E1216 12:16:04.874769 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51afacd6-c090-45db-aa5c-da8b53734401" 
containerName="keystone-bootstrap" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.874776 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="51afacd6-c090-45db-aa5c-da8b53734401" containerName="keystone-bootstrap" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.874953 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="51afacd6-c090-45db-aa5c-da8b53734401" containerName="keystone-bootstrap" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.874963 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a28faa94-62ae-49fa-8f01-8dff4989044e" containerName="dnsmasq-dns" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.875650 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.881003 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7ff49bdd98-ng49z"] Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.881627 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.881865 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.890439 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-d6pz6" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.890499 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.890829 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 16 12:16:04 crc kubenswrapper[4805]: I1216 12:16:04.890927 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.014606 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-combined-ca-bundle\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.014809 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-scripts\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.015003 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-config-data\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.015248 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-credential-keys\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " 
pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.015328 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-internal-tls-certs\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.015379 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-fernet-keys\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.015394 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-public-tls-certs\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.015409 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glnhj\" (UniqueName: \"kubernetes.io/projected/234428eb-9306-44b8-baf0-e4c6c0772699-kube-api-access-glnhj\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.116735 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-scripts\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.116809 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-config-data\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.116924 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-credential-keys\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.116978 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-internal-tls-certs\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.117026 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-fernet-keys\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 
12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.117054 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-public-tls-certs\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.117080 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glnhj\" (UniqueName: \"kubernetes.io/projected/234428eb-9306-44b8-baf0-e4c6c0772699-kube-api-access-glnhj\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.117160 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-combined-ca-bundle\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.145163 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-combined-ca-bundle\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.153558 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-public-tls-certs\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.153663 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-scripts\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.164525 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-credential-keys\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.164900 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-internal-tls-certs\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.166100 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-config-data\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.187583 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/234428eb-9306-44b8-baf0-e4c6c0772699-fernet-keys\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.201674 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glnhj\" (UniqueName: \"kubernetes.io/projected/234428eb-9306-44b8-baf0-e4c6c0772699-kube-api-access-glnhj\") pod \"keystone-7ff49bdd98-ng49z\" (UID: \"234428eb-9306-44b8-baf0-e4c6c0772699\") " pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.227211 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.819970 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7ff49bdd98-ng49z"] Dec 16 12:16:05 crc kubenswrapper[4805]: W1216 12:16:05.828136 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod234428eb_9306_44b8_baf0_e4c6c0772699.slice/crio-40de4030af81fe50011d16231ae1aff12b07be11f9c3b6b99114f0a42cf50d51 WatchSource:0}: Error finding container 40de4030af81fe50011d16231ae1aff12b07be11f9c3b6b99114f0a42cf50d51: Status 404 returned error can't find the container with id 40de4030af81fe50011d16231ae1aff12b07be11f9c3b6b99114f0a42cf50d51 Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.829348 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a0bdd7ef-5424-4628-b866-0aa42d44f257","Type":"ContainerStarted","Data":"9bdf3d24720c99d7ad95d8dbacf0c09ce9332ae14b7624c92ceba29805599bca"} Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.834282 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"41274a07-eb55-48ed-9ebd-35edbc26f4f4","Type":"ContainerStarted","Data":"8f9fbebe97f219721649f5c3f963ccd38f1df79ba3ef18a98c0ae98f8c60ab13"} Dec 16 12:16:05 crc kubenswrapper[4805]: I1216 12:16:05.860704 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.860686629 podStartE2EDuration="12.860686629s" podCreationTimestamp="2025-12-16 12:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:16:05.85585974 +0000 UTC m=+1239.574117555" watchObservedRunningTime="2025-12-16 12:16:05.860686629 +0000 UTC m=+1239.578944444" Dec 16 12:16:06 crc kubenswrapper[4805]: I1216 12:16:06.844642 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7ff49bdd98-ng49z" event={"ID":"234428eb-9306-44b8-baf0-e4c6c0772699","Type":"ContainerStarted","Data":"558a0cf86f90b7414af31d9dd08409ba843cbf951145efc6284ca801086d7490"} Dec 16 12:16:06 crc kubenswrapper[4805]: I1216 12:16:06.845864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7ff49bdd98-ng49z" event={"ID":"234428eb-9306-44b8-baf0-e4c6c0772699","Type":"ContainerStarted","Data":"40de4030af81fe50011d16231ae1aff12b07be11f9c3b6b99114f0a42cf50d51"} Dec 16 12:16:06 crc kubenswrapper[4805]: I1216 12:16:06.846241 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9v2nl" 
event={"ID":"aa98eb47-6335-4544-b5d6-718b55075000","Type":"ContainerStarted","Data":"c8c5f0d29b119f9ab184cf85bddef944ae6359182ffbe96fd8612c9df390f09b"} Dec 16 12:16:06 crc kubenswrapper[4805]: I1216 12:16:06.869660 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=13.869642325000001 podStartE2EDuration="13.869642325s" podCreationTimestamp="2025-12-16 12:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:16:06.863060266 +0000 UTC m=+1240.581318071" watchObservedRunningTime="2025-12-16 12:16:06.869642325 +0000 UTC m=+1240.587900140" Dec 16 12:16:06 crc kubenswrapper[4805]: I1216 12:16:06.881404 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-9v2nl" podStartSLOduration=4.430547831 podStartE2EDuration="58.881388772s" podCreationTimestamp="2025-12-16 12:15:08 +0000 UTC" firstStartedPulling="2025-12-16 12:15:10.05846719 +0000 UTC m=+1183.776725005" lastFinishedPulling="2025-12-16 12:16:04.509308141 +0000 UTC m=+1238.227565946" observedRunningTime="2025-12-16 12:16:06.878907901 +0000 UTC m=+1240.597165706" watchObservedRunningTime="2025-12-16 12:16:06.881388772 +0000 UTC m=+1240.599646587" Dec 16 12:16:07 crc kubenswrapper[4805]: I1216 12:16:07.062322 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6d8fcdbb66-gmhjg" podUID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 16 12:16:07 crc kubenswrapper[4805]: I1216 12:16:07.172570 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69799999fb-rbm4h" podUID="d4af4b9e-77b2-4f27-8148-7000d60f2266" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Dec 16 12:16:07 crc kubenswrapper[4805]: I1216 12:16:07.857295 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:07 crc kubenswrapper[4805]: I1216 12:16:07.884687 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7ff49bdd98-ng49z" podStartSLOduration=3.884661935 podStartE2EDuration="3.884661935s" podCreationTimestamp="2025-12-16 12:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:16:07.872477025 +0000 UTC m=+1241.590734840" watchObservedRunningTime="2025-12-16 12:16:07.884661935 +0000 UTC m=+1241.602919750" Dec 16 12:16:14 crc kubenswrapper[4805]: I1216 12:16:14.426254 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 16 12:16:14 crc kubenswrapper[4805]: I1216 12:16:14.426827 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 16 12:16:14 crc kubenswrapper[4805]: I1216 12:16:14.745115 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 16 12:16:14 crc kubenswrapper[4805]: I1216 12:16:14.745213 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 16 12:16:14 crc kubenswrapper[4805]: I1216 12:16:14.845838 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 16 12:16:14 crc kubenswrapper[4805]: I1216 12:16:14.846162 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 16 12:16:14 crc kubenswrapper[4805]: I1216 12:16:14.846892 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 16 12:16:14 crc kubenswrapper[4805]: I1216 12:16:14.847814 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 16 12:16:14 crc kubenswrapper[4805]: I1216 12:16:14.929499 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 16 12:16:14 crc kubenswrapper[4805]: I1216 12:16:14.929565 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 16 12:16:14 crc kubenswrapper[4805]: I1216 12:16:14.929584 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 16 12:16:14 crc kubenswrapper[4805]: I1216 12:16:14.929601 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 16 12:16:17 crc kubenswrapper[4805]: I1216 12:16:17.060874 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6d8fcdbb66-gmhjg" podUID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 16 12:16:17 crc kubenswrapper[4805]: I1216 12:16:17.168899 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69799999fb-rbm4h" podUID="d4af4b9e-77b2-4f27-8148-7000d60f2266" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Dec 16 12:16:20 crc kubenswrapper[4805]: I1216 12:16:20.106755 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 16 12:16:20 crc kubenswrapper[4805]: I1216 12:16:20.108575 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 16 12:16:20 crc kubenswrapper[4805]: I1216 12:16:20.110671 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 16 12:16:20 crc kubenswrapper[4805]: I1216 12:16:20.110852 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:16:20 crc kubenswrapper[4805]: I1216 12:16:20.112880 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 16 12:16:22 crc kubenswrapper[4805]: E1216 12:16:22.059225 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Dec 16 12:16:22 crc kubenswrapper[4805]: E1216 12:16:22.059679 4805 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pscpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a26a23d3-144a-4e2d-8ce2-63a63da575ff): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 16 12:16:22 crc kubenswrapper[4805]: E1216 12:16:22.060889 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="a26a23d3-144a-4e2d-8ce2-63a63da575ff" Dec 16 12:16:23 crc kubenswrapper[4805]: I1216 12:16:23.001624 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a26a23d3-144a-4e2d-8ce2-63a63da575ff" 
containerName="ceilometer-notification-agent" containerID="cri-o://5a4226d7d5c10ea6eabbcc1f9aeff4c24f8c6544c52123b8d1077e134a6f849a" gracePeriod=30 Dec 16 12:16:23 crc kubenswrapper[4805]: I1216 12:16:23.001752 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a26a23d3-144a-4e2d-8ce2-63a63da575ff" containerName="sg-core" containerID="cri-o://7b6c185bf76cea5cf507cad0bd80bb4004ac11a3da0d298a3b60e93d0c6136d5" gracePeriod=30 Dec 16 12:16:24 crc kubenswrapper[4805]: I1216 12:16:24.011878 4805 generic.go:334] "Generic (PLEG): container finished" podID="a26a23d3-144a-4e2d-8ce2-63a63da575ff" containerID="7b6c185bf76cea5cf507cad0bd80bb4004ac11a3da0d298a3b60e93d0c6136d5" exitCode=2 Dec 16 12:16:24 crc kubenswrapper[4805]: I1216 12:16:24.011957 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a26a23d3-144a-4e2d-8ce2-63a63da575ff","Type":"ContainerDied","Data":"7b6c185bf76cea5cf507cad0bd80bb4004ac11a3da0d298a3b60e93d0c6136d5"} Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.040583 4805 generic.go:334] "Generic (PLEG): container finished" podID="a26a23d3-144a-4e2d-8ce2-63a63da575ff" containerID="5a4226d7d5c10ea6eabbcc1f9aeff4c24f8c6544c52123b8d1077e134a6f849a" exitCode=0 Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.040741 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a26a23d3-144a-4e2d-8ce2-63a63da575ff","Type":"ContainerDied","Data":"5a4226d7d5c10ea6eabbcc1f9aeff4c24f8c6544c52123b8d1077e134a6f849a"} Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.379486 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.443405 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-sg-core-conf-yaml\") pod \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.443492 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a26a23d3-144a-4e2d-8ce2-63a63da575ff-log-httpd\") pod \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.443535 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-combined-ca-bundle\") pod \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.443599 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a26a23d3-144a-4e2d-8ce2-63a63da575ff-run-httpd\") pod \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.443704 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pscpn\" (UniqueName: \"kubernetes.io/projected/a26a23d3-144a-4e2d-8ce2-63a63da575ff-kube-api-access-pscpn\") pod \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " Dec 16 12:16:27 
crc kubenswrapper[4805]: I1216 12:16:27.444188 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-scripts\") pod \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.444209 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-config-data\") pod \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\" (UID: \"a26a23d3-144a-4e2d-8ce2-63a63da575ff\") " Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.444066 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a26a23d3-144a-4e2d-8ce2-63a63da575ff-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a26a23d3-144a-4e2d-8ce2-63a63da575ff" (UID: "a26a23d3-144a-4e2d-8ce2-63a63da575ff"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.444735 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a26a23d3-144a-4e2d-8ce2-63a63da575ff-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.445491 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a26a23d3-144a-4e2d-8ce2-63a63da575ff-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a26a23d3-144a-4e2d-8ce2-63a63da575ff" (UID: "a26a23d3-144a-4e2d-8ce2-63a63da575ff"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.453461 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a26a23d3-144a-4e2d-8ce2-63a63da575ff-kube-api-access-pscpn" (OuterVolumeSpecName: "kube-api-access-pscpn") pod "a26a23d3-144a-4e2d-8ce2-63a63da575ff" (UID: "a26a23d3-144a-4e2d-8ce2-63a63da575ff"). InnerVolumeSpecName "kube-api-access-pscpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.454609 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-scripts" (OuterVolumeSpecName: "scripts") pod "a26a23d3-144a-4e2d-8ce2-63a63da575ff" (UID: "a26a23d3-144a-4e2d-8ce2-63a63da575ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.471414 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a26a23d3-144a-4e2d-8ce2-63a63da575ff" (UID: "a26a23d3-144a-4e2d-8ce2-63a63da575ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.486300 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a26a23d3-144a-4e2d-8ce2-63a63da575ff" (UID: "a26a23d3-144a-4e2d-8ce2-63a63da575ff"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.486354 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-config-data" (OuterVolumeSpecName: "config-data") pod "a26a23d3-144a-4e2d-8ce2-63a63da575ff" (UID: "a26a23d3-144a-4e2d-8ce2-63a63da575ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.546667 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.546731 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a26a23d3-144a-4e2d-8ce2-63a63da575ff-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.546742 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.546752 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pscpn\" (UniqueName: \"kubernetes.io/projected/a26a23d3-144a-4e2d-8ce2-63a63da575ff-kube-api-access-pscpn\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.546764 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:27 crc kubenswrapper[4805]: I1216 12:16:27.546773 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a26a23d3-144a-4e2d-8ce2-63a63da575ff-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.052738 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a26a23d3-144a-4e2d-8ce2-63a63da575ff","Type":"ContainerDied","Data":"5fdafa2282a66f659c414130a8989b6a02f4a61d6c35f5fdf7e2a818ed1ddeb7"} Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.053040 4805 scope.go:117] "RemoveContainer" containerID="7b6c185bf76cea5cf507cad0bd80bb4004ac11a3da0d298a3b60e93d0c6136d5" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.054128 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.086993 4805 scope.go:117] "RemoveContainer" containerID="5a4226d7d5c10ea6eabbcc1f9aeff4c24f8c6544c52123b8d1077e134a6f849a" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.149650 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.160184 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.168341 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:16:28 crc kubenswrapper[4805]: E1216 12:16:28.168702 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a26a23d3-144a-4e2d-8ce2-63a63da575ff" containerName="sg-core" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.168723 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a26a23d3-144a-4e2d-8ce2-63a63da575ff" containerName="sg-core" Dec 16 12:16:28 crc kubenswrapper[4805]: E1216 12:16:28.168749 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a26a23d3-144a-4e2d-8ce2-63a63da575ff" containerName="ceilometer-notification-agent" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.168757 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a26a23d3-144a-4e2d-8ce2-63a63da575ff" containerName="ceilometer-notification-agent" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.168912 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a26a23d3-144a-4e2d-8ce2-63a63da575ff" containerName="ceilometer-notification-agent" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.168935 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a26a23d3-144a-4e2d-8ce2-63a63da575ff" containerName="sg-core" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.172057 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.176373 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.176635 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.185231 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.261925 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ft6t\" (UniqueName: \"kubernetes.io/projected/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-kube-api-access-6ft6t\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.262116 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-run-httpd\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.262199 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-scripts\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.262242 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.262269 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.262299 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-log-httpd\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.262460 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-config-data\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.365101 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ft6t\" (UniqueName: \"kubernetes.io/projected/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-kube-api-access-6ft6t\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: 
I1216 12:16:28.365599 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-run-httpd\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.365679 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-scripts\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.365701 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.365731 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.365766 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-log-httpd\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.365863 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-config-data\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.366637 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-log-httpd\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.366713 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-run-httpd\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.371941 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.376425 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-config-data\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.382631 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-scripts\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.388099 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.410616 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ft6t\" (UniqueName: \"kubernetes.io/projected/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-kube-api-access-6ft6t\") pod \"ceilometer-0\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.493385 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:16:28 crc kubenswrapper[4805]: I1216 12:16:28.533889 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a26a23d3-144a-4e2d-8ce2-63a63da575ff" path="/var/lib/kubelet/pods/a26a23d3-144a-4e2d-8ce2-63a63da575ff/volumes" Dec 16 12:16:29 crc kubenswrapper[4805]: I1216 12:16:29.212748 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:16:30 crc kubenswrapper[4805]: I1216 12:16:30.094239 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2","Type":"ContainerStarted","Data":"a37c176ee101f8ef3622a06499197c594d98581d6729b661b33c6a3c975b6805"} Dec 16 12:16:30 crc kubenswrapper[4805]: I1216 12:16:30.315674 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:16:30 crc kubenswrapper[4805]: I1216 12:16:30.349554 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:16:32 crc kubenswrapper[4805]: I1216 12:16:32.568929 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:16:32 crc kubenswrapper[4805]: I1216 12:16:32.601569 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-69799999fb-rbm4h" Dec 16 12:16:32 crc kubenswrapper[4805]: I1216 12:16:32.758552 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d8fcdbb66-gmhjg"] Dec 16 12:16:33 crc kubenswrapper[4805]: I1216 12:16:33.123836 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2","Type":"ContainerStarted","Data":"016121b0bc4f0bb282bb62dce5a20f5d6943760c34026bf99981e0592c0dae6a"} Dec 16 12:16:33 crc kubenswrapper[4805]: I1216 12:16:33.124190 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2","Type":"ContainerStarted","Data":"931670af68babbfee261e142cb86cac906c0c9a9ea64134ade4fb7d0a0e3c0c3"} Dec 16 12:16:33 crc kubenswrapper[4805]: I1216 12:16:33.124024 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d8fcdbb66-gmhjg" podUID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" containerName="horizon-log" 
containerID="cri-o://985f88c1f9a128dbb9319d7a54a3e3a7b4e72ec8f5b8e705eb481b82a7cd4102" gracePeriod=30 Dec 16 12:16:33 crc kubenswrapper[4805]: I1216 12:16:33.124285 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d8fcdbb66-gmhjg" podUID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" containerName="horizon" containerID="cri-o://0d12cc96b864154bb1e684e6dd7cc9990d58ab7a48b62303008832bebe1d0a2b" gracePeriod=30 Dec 16 12:16:35 crc kubenswrapper[4805]: I1216 12:16:35.144363 4805 generic.go:334] "Generic (PLEG): container finished" podID="bedb416e-1423-4c43-8676-f6843c51c7b0" containerID="c098b155aef353dd865ed392e4d7db83c6b4cf9da55f74658a418b5c260f9c3c" exitCode=0 Dec 16 12:16:35 crc kubenswrapper[4805]: I1216 12:16:35.144623 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l88fx" event={"ID":"bedb416e-1423-4c43-8676-f6843c51c7b0","Type":"ContainerDied","Data":"c098b155aef353dd865ed392e4d7db83c6b4cf9da55f74658a418b5c260f9c3c"} Dec 16 12:16:35 crc kubenswrapper[4805]: I1216 12:16:35.146693 4805 generic.go:334] "Generic (PLEG): container finished" podID="ed0d4cec-a7ad-4887-8dc8-e7da47a7c878" containerID="62eb1b9103f333ef9de12ee3767db5b2504d85daf1b363217f7912283f710a33" exitCode=0 Dec 16 12:16:35 crc kubenswrapper[4805]: I1216 12:16:35.146732 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2th2m" event={"ID":"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878","Type":"ContainerDied","Data":"62eb1b9103f333ef9de12ee3767db5b2504d85daf1b363217f7912283f710a33"} Dec 16 12:16:35 crc kubenswrapper[4805]: I1216 12:16:35.148386 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2","Type":"ContainerStarted","Data":"f22085a138dff8c979e5ba861a288a8d56e982e940f4a31e2318f5e706a9b221"} Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.542871 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2th2m" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.562555 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-l88fx" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.652069 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbkbl\" (UniqueName: \"kubernetes.io/projected/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-kube-api-access-kbkbl\") pod \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.652123 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-combined-ca-bundle\") pod \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.652221 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5t2p\" (UniqueName: \"kubernetes.io/projected/bedb416e-1423-4c43-8676-f6843c51c7b0-kube-api-access-m5t2p\") pod \"bedb416e-1423-4c43-8676-f6843c51c7b0\" (UID: \"bedb416e-1423-4c43-8676-f6843c51c7b0\") " Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.652279 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-logs\") pod \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.652320 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bedb416e-1423-4c43-8676-f6843c51c7b0-db-sync-config-data\") pod \"bedb416e-1423-4c43-8676-f6843c51c7b0\" (UID: \"bedb416e-1423-4c43-8676-f6843c51c7b0\") " Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.652358 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bedb416e-1423-4c43-8676-f6843c51c7b0-combined-ca-bundle\") pod \"bedb416e-1423-4c43-8676-f6843c51c7b0\" (UID: \"bedb416e-1423-4c43-8676-f6843c51c7b0\") " Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.652412 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-scripts\") pod \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.652428 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-config-data\") pod \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\" (UID: \"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878\") " Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.655447 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-logs" (OuterVolumeSpecName: "logs") pod "ed0d4cec-a7ad-4887-8dc8-e7da47a7c878" (UID: "ed0d4cec-a7ad-4887-8dc8-e7da47a7c878"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.660628 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-scripts" (OuterVolumeSpecName: "scripts") pod "ed0d4cec-a7ad-4887-8dc8-e7da47a7c878" (UID: "ed0d4cec-a7ad-4887-8dc8-e7da47a7c878"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.664038 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bedb416e-1423-4c43-8676-f6843c51c7b0-kube-api-access-m5t2p" (OuterVolumeSpecName: "kube-api-access-m5t2p") pod "bedb416e-1423-4c43-8676-f6843c51c7b0" (UID: "bedb416e-1423-4c43-8676-f6843c51c7b0"). InnerVolumeSpecName "kube-api-access-m5t2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.666765 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-kube-api-access-kbkbl" (OuterVolumeSpecName: "kube-api-access-kbkbl") pod "ed0d4cec-a7ad-4887-8dc8-e7da47a7c878" (UID: "ed0d4cec-a7ad-4887-8dc8-e7da47a7c878"). InnerVolumeSpecName "kube-api-access-kbkbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.670259 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bedb416e-1423-4c43-8676-f6843c51c7b0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bedb416e-1423-4c43-8676-f6843c51c7b0" (UID: "bedb416e-1423-4c43-8676-f6843c51c7b0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.712253 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-config-data" (OuterVolumeSpecName: "config-data") pod "ed0d4cec-a7ad-4887-8dc8-e7da47a7c878" (UID: "ed0d4cec-a7ad-4887-8dc8-e7da47a7c878"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.719479 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bedb416e-1423-4c43-8676-f6843c51c7b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bedb416e-1423-4c43-8676-f6843c51c7b0" (UID: "bedb416e-1423-4c43-8676-f6843c51c7b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.730885 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed0d4cec-a7ad-4887-8dc8-e7da47a7c878" (UID: "ed0d4cec-a7ad-4887-8dc8-e7da47a7c878"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.753972 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.754011 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.754022 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbkbl\" (UniqueName: \"kubernetes.io/projected/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-kube-api-access-kbkbl\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.754033 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.754042 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5t2p\" (UniqueName: \"kubernetes.io/projected/bedb416e-1423-4c43-8676-f6843c51c7b0-kube-api-access-m5t2p\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.754052 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878-logs\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.754064 4805 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bedb416e-1423-4c43-8676-f6843c51c7b0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:36 crc kubenswrapper[4805]: I1216 12:16:36.754074 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bedb416e-1423-4c43-8676-f6843c51c7b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.061000 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d8fcdbb66-gmhjg" podUID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.203317 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2","Type":"ContainerStarted","Data":"71a2103655edc4a8a6550f857ae913ee1e0b6bf530a88798caa5255a70a8d68e"} Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.205092 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.208824 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l88fx" event={"ID":"bedb416e-1423-4c43-8676-f6843c51c7b0","Type":"ContainerDied","Data":"275a55bb31eb7b116c2887e9cb35be196db04085a23e04d096e2f439d877790e"} Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.208874 4805 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="275a55bb31eb7b116c2887e9cb35be196db04085a23e04d096e2f439d877790e" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.208997 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l88fx" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.231299 4805 generic.go:334] "Generic (PLEG): container finished" podID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" containerID="0d12cc96b864154bb1e684e6dd7cc9990d58ab7a48b62303008832bebe1d0a2b" exitCode=0 Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.231706 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d8fcdbb66-gmhjg" event={"ID":"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c","Type":"ContainerDied","Data":"0d12cc96b864154bb1e684e6dd7cc9990d58ab7a48b62303008832bebe1d0a2b"} Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.243844 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2th2m" event={"ID":"ed0d4cec-a7ad-4887-8dc8-e7da47a7c878","Type":"ContainerDied","Data":"186345c497ecaa9af594381d5783fce3466b8a13cbfd7557b541cb8fe21c6fff"} Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.243908 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="186345c497ecaa9af594381d5783fce3466b8a13cbfd7557b541cb8fe21c6fff" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.243959 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2th2m" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.275919 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.639900446 podStartE2EDuration="9.275894843s" podCreationTimestamp="2025-12-16 12:16:28 +0000 UTC" firstStartedPulling="2025-12-16 12:16:29.224386039 +0000 UTC m=+1262.942643844" lastFinishedPulling="2025-12-16 12:16:35.860380436 +0000 UTC m=+1269.578638241" observedRunningTime="2025-12-16 12:16:37.235717961 +0000 UTC m=+1270.953975776" watchObservedRunningTime="2025-12-16 12:16:37.275894843 +0000 UTC m=+1270.994152658" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.409329 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-775868cbc4-vvjnt"] Dec 16 12:16:37 crc kubenswrapper[4805]: E1216 12:16:37.409726 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedb416e-1423-4c43-8676-f6843c51c7b0" containerName="barbican-db-sync" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.409743 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedb416e-1423-4c43-8676-f6843c51c7b0" containerName="barbican-db-sync" Dec 16 12:16:37 crc kubenswrapper[4805]: E1216 12:16:37.409753 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0d4cec-a7ad-4887-8dc8-e7da47a7c878" containerName="placement-db-sync" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.409759 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0d4cec-a7ad-4887-8dc8-e7da47a7c878" containerName="placement-db-sync" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.410004 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="bedb416e-1423-4c43-8676-f6843c51c7b0" containerName="barbican-db-sync" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.410024 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0d4cec-a7ad-4887-8dc8-e7da47a7c878" containerName="placement-db-sync" Dec 16 12:16:37 crc 
kubenswrapper[4805]: I1216 12:16:37.410973 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.417103 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-kx4cl" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.417224 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.417349 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.417639 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.417664 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.469567 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-internal-tls-certs\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.469653 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-scripts\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.469688 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-public-tls-certs\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.469715 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-combined-ca-bundle\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.469956 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dppqs\" (UniqueName: \"kubernetes.io/projected/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-kube-api-access-dppqs\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.470083 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-config-data\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.470102 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-logs\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.497088 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-775868cbc4-vvjnt"] Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.569315 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-c4448787f-5pqnk"] Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.571666 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-combined-ca-bundle\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.571716 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dppqs\" (UniqueName: \"kubernetes.io/projected/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-kube-api-access-dppqs\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.571760 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-config-data\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.571777 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-logs\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.571834 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-internal-tls-certs\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.571880 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-scripts\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.571908 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-public-tls-certs\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.572550 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-logs\") pod \"placement-775868cbc4-vvjnt\" 
(UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.572665 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.577585 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-public-tls-certs\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.577719 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-internal-tls-certs\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.579548 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-scripts\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.601530 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.602336 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.602469 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4z59v" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.615091 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-combined-ca-bundle\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.632188 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dppqs\" (UniqueName: \"kubernetes.io/projected/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-kube-api-access-dppqs\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.644124 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c4448787f-5pqnk"] Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.673190 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rxqm\" (UniqueName: \"kubernetes.io/projected/c60ef5b9-ec24-43c3-ab83-7a6f10a972bc-kube-api-access-5rxqm\") pod \"barbican-worker-c4448787f-5pqnk\" (UID: \"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc\") " pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.673324 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c60ef5b9-ec24-43c3-ab83-7a6f10a972bc-config-data\") pod 
\"barbican-worker-c4448787f-5pqnk\" (UID: \"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc\") " pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.673391 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c60ef5b9-ec24-43c3-ab83-7a6f10a972bc-combined-ca-bundle\") pod \"barbican-worker-c4448787f-5pqnk\" (UID: \"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc\") " pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.673427 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c60ef5b9-ec24-43c3-ab83-7a6f10a972bc-config-data-custom\") pod \"barbican-worker-c4448787f-5pqnk\" (UID: \"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc\") " pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.673471 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c60ef5b9-ec24-43c3-ab83-7a6f10a972bc-logs\") pod \"barbican-worker-c4448787f-5pqnk\" (UID: \"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc\") " pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.678314 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e-config-data\") pod \"placement-775868cbc4-vvjnt\" (UID: \"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e\") " pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.731694 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.775751 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c60ef5b9-ec24-43c3-ab83-7a6f10a972bc-logs\") pod \"barbican-worker-c4448787f-5pqnk\" (UID: \"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc\") " pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.776093 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rxqm\" (UniqueName: \"kubernetes.io/projected/c60ef5b9-ec24-43c3-ab83-7a6f10a972bc-kube-api-access-5rxqm\") pod \"barbican-worker-c4448787f-5pqnk\" (UID: \"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc\") " pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.776199 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c60ef5b9-ec24-43c3-ab83-7a6f10a972bc-config-data\") pod \"barbican-worker-c4448787f-5pqnk\" (UID: \"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc\") " pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.776280 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c60ef5b9-ec24-43c3-ab83-7a6f10a972bc-combined-ca-bundle\") pod \"barbican-worker-c4448787f-5pqnk\" (UID: \"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc\") " pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.776330 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c60ef5b9-ec24-43c3-ab83-7a6f10a972bc-config-data-custom\") pod \"barbican-worker-c4448787f-5pqnk\" (UID: \"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc\") " pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.776399 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c60ef5b9-ec24-43c3-ab83-7a6f10a972bc-logs\") pod \"barbican-worker-c4448787f-5pqnk\" (UID: \"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc\") " pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.782876 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c60ef5b9-ec24-43c3-ab83-7a6f10a972bc-config-data-custom\") pod \"barbican-worker-c4448787f-5pqnk\" (UID: \"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc\") " pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.784810 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c60ef5b9-ec24-43c3-ab83-7a6f10a972bc-config-data\") pod \"barbican-worker-c4448787f-5pqnk\" (UID: \"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc\") " pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.788776 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c60ef5b9-ec24-43c3-ab83-7a6f10a972bc-combined-ca-bundle\") pod \"barbican-worker-c4448787f-5pqnk\" (UID: \"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc\") " 
pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.811776 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-xrd22"] Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.822222 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.840047 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rxqm\" (UniqueName: \"kubernetes.io/projected/c60ef5b9-ec24-43c3-ab83-7a6f10a972bc-kube-api-access-5rxqm\") pod \"barbican-worker-c4448787f-5pqnk\" (UID: \"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc\") " pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.850997 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-xrd22"] Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.880605 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njq2g\" (UniqueName: \"kubernetes.io/projected/a451754e-cb72-4c93-833e-2c10d5b3bea6-kube-api-access-njq2g\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.880972 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-config\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.881126 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.881299 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.881468 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.881649 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.938691 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-keystone-listener-64dcddf9dd-sdrs9"] Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.940455 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.948514 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.983902 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-config\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.983983 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.984032 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.984086 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.984133 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beba05fa-4b9d-44f2-88f4-87611a38604b-config-data\") pod \"barbican-keystone-listener-64dcddf9dd-sdrs9\" (UID: \"beba05fa-4b9d-44f2-88f4-87611a38604b\") " pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.984190 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/beba05fa-4b9d-44f2-88f4-87611a38604b-config-data-custom\") pod \"barbican-keystone-listener-64dcddf9dd-sdrs9\" (UID: \"beba05fa-4b9d-44f2-88f4-87611a38604b\") " pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.984224 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.984252 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beba05fa-4b9d-44f2-88f4-87611a38604b-logs\") pod \"barbican-keystone-listener-64dcddf9dd-sdrs9\" 
(UID: \"beba05fa-4b9d-44f2-88f4-87611a38604b\") " pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.984348 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njq2g\" (UniqueName: \"kubernetes.io/projected/a451754e-cb72-4c93-833e-2c10d5b3bea6-kube-api-access-njq2g\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.984376 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh92p\" (UniqueName: \"kubernetes.io/projected/beba05fa-4b9d-44f2-88f4-87611a38604b-kube-api-access-wh92p\") pod \"barbican-keystone-listener-64dcddf9dd-sdrs9\" (UID: \"beba05fa-4b9d-44f2-88f4-87611a38604b\") " pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.984401 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beba05fa-4b9d-44f2-88f4-87611a38604b-combined-ca-bundle\") pod \"barbican-keystone-listener-64dcddf9dd-sdrs9\" (UID: \"beba05fa-4b9d-44f2-88f4-87611a38604b\") " pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.985753 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-config\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.992957 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.993712 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:37 crc kubenswrapper[4805]: I1216 12:16:37.994134 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.000304 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.011276 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-64dcddf9dd-sdrs9"] Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.048787 4805 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-c4448787f-5pqnk" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.053632 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njq2g\" (UniqueName: \"kubernetes.io/projected/a451754e-cb72-4c93-833e-2c10d5b3bea6-kube-api-access-njq2g\") pod \"dnsmasq-dns-59d5ff467f-xrd22\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.091039 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beba05fa-4b9d-44f2-88f4-87611a38604b-config-data\") pod \"barbican-keystone-listener-64dcddf9dd-sdrs9\" (UID: \"beba05fa-4b9d-44f2-88f4-87611a38604b\") " pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.091123 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/beba05fa-4b9d-44f2-88f4-87611a38604b-config-data-custom\") pod \"barbican-keystone-listener-64dcddf9dd-sdrs9\" (UID: \"beba05fa-4b9d-44f2-88f4-87611a38604b\") " pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.091179 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beba05fa-4b9d-44f2-88f4-87611a38604b-logs\") pod \"barbican-keystone-listener-64dcddf9dd-sdrs9\" (UID: \"beba05fa-4b9d-44f2-88f4-87611a38604b\") " pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.091279 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh92p\" (UniqueName: \"kubernetes.io/projected/beba05fa-4b9d-44f2-88f4-87611a38604b-kube-api-access-wh92p\") pod \"barbican-keystone-listener-64dcddf9dd-sdrs9\" (UID: \"beba05fa-4b9d-44f2-88f4-87611a38604b\") " pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.091306 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beba05fa-4b9d-44f2-88f4-87611a38604b-combined-ca-bundle\") pod \"barbican-keystone-listener-64dcddf9dd-sdrs9\" (UID: \"beba05fa-4b9d-44f2-88f4-87611a38604b\") " pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.106833 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beba05fa-4b9d-44f2-88f4-87611a38604b-logs\") pod \"barbican-keystone-listener-64dcddf9dd-sdrs9\" (UID: \"beba05fa-4b9d-44f2-88f4-87611a38604b\") " pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.124721 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/beba05fa-4b9d-44f2-88f4-87611a38604b-config-data-custom\") pod \"barbican-keystone-listener-64dcddf9dd-sdrs9\" (UID: \"beba05fa-4b9d-44f2-88f4-87611a38604b\") " pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.130163 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/beba05fa-4b9d-44f2-88f4-87611a38604b-config-data\") pod \"barbican-keystone-listener-64dcddf9dd-sdrs9\" (UID: \"beba05fa-4b9d-44f2-88f4-87611a38604b\") " pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.134892 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beba05fa-4b9d-44f2-88f4-87611a38604b-combined-ca-bundle\") pod \"barbican-keystone-listener-64dcddf9dd-sdrs9\" (UID: \"beba05fa-4b9d-44f2-88f4-87611a38604b\") " pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.142675 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b9975d546-lhbk6"] Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.144223 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.147411 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.178863 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b9975d546-lhbk6"] Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.180779 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh92p\" (UniqueName: \"kubernetes.io/projected/beba05fa-4b9d-44f2-88f4-87611a38604b-kube-api-access-wh92p\") pod \"barbican-keystone-listener-64dcddf9dd-sdrs9\" (UID: \"beba05fa-4b9d-44f2-88f4-87611a38604b\") " pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.195261 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.197012 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-config-data-custom\") pod \"barbican-api-6b9975d546-lhbk6\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.197129 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83820f81-bf79-42be-9f7f-5a67edb260ff-logs\") pod \"barbican-api-6b9975d546-lhbk6\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.197181 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-config-data\") pod \"barbican-api-6b9975d546-lhbk6\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.197219 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qrcl\" (UniqueName: \"kubernetes.io/projected/83820f81-bf79-42be-9f7f-5a67edb260ff-kube-api-access-8qrcl\") pod \"barbican-api-6b9975d546-lhbk6\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.197257 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-combined-ca-bundle\") pod \"barbican-api-6b9975d546-lhbk6\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.292066 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.305279 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83820f81-bf79-42be-9f7f-5a67edb260ff-logs\") pod \"barbican-api-6b9975d546-lhbk6\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.305331 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-config-data\") pod \"barbican-api-6b9975d546-lhbk6\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.305367 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qrcl\" (UniqueName: \"kubernetes.io/projected/83820f81-bf79-42be-9f7f-5a67edb260ff-kube-api-access-8qrcl\") pod \"barbican-api-6b9975d546-lhbk6\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.305401 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-combined-ca-bundle\") pod \"barbican-api-6b9975d546-lhbk6\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.305456 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-config-data-custom\") pod \"barbican-api-6b9975d546-lhbk6\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.312630 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83820f81-bf79-42be-9f7f-5a67edb260ff-logs\") pod \"barbican-api-6b9975d546-lhbk6\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.322106 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-combined-ca-bundle\") pod \"barbican-api-6b9975d546-lhbk6\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.334796 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-config-data-custom\") pod \"barbican-api-6b9975d546-lhbk6\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.335003 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-config-data\") pod \"barbican-api-6b9975d546-lhbk6\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 
12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.362081 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qrcl\" (UniqueName: \"kubernetes.io/projected/83820f81-bf79-42be-9f7f-5a67edb260ff-kube-api-access-8qrcl\") pod \"barbican-api-6b9975d546-lhbk6\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:38 crc kubenswrapper[4805]: I1216 12:16:38.645711 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:39 crc kubenswrapper[4805]: I1216 12:16:39.025441 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-775868cbc4-vvjnt"] Dec 16 12:16:39 crc kubenswrapper[4805]: I1216 12:16:39.061246 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c4448787f-5pqnk"] Dec 16 12:16:39 crc kubenswrapper[4805]: I1216 12:16:39.431547 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c4448787f-5pqnk" event={"ID":"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc","Type":"ContainerStarted","Data":"d4b266367735d1a6d7d94bf29573d6770171938abe82bc19b29afb3c30a4280e"} Dec 16 12:16:39 crc kubenswrapper[4805]: I1216 12:16:39.439469 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-775868cbc4-vvjnt" event={"ID":"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e","Type":"ContainerStarted","Data":"b5f4e32dcfb8e6e9a28dc9fa40cf246ff86981f40cfee4dc11c1ee32f3d6b9b9"} Dec 16 12:16:39 crc kubenswrapper[4805]: I1216 12:16:39.490228 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-xrd22"] Dec 16 12:16:39 crc kubenswrapper[4805]: I1216 12:16:39.571785 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-64dcddf9dd-sdrs9"] Dec 16 12:16:39 crc kubenswrapper[4805]: I1216 12:16:39.813574 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b9975d546-lhbk6"] Dec 16 12:16:39 crc kubenswrapper[4805]: W1216 12:16:39.837479 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83820f81_bf79_42be_9f7f_5a67edb260ff.slice/crio-b90cde3743d33668186decc90873490014061e8d11c4312b4e1ea96f188d7482 WatchSource:0}: Error finding container b90cde3743d33668186decc90873490014061e8d11c4312b4e1ea96f188d7482: Status 404 returned error can't find the container with id b90cde3743d33668186decc90873490014061e8d11c4312b4e1ea96f188d7482 Dec 16 12:16:40 crc kubenswrapper[4805]: I1216 12:16:40.451287 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" event={"ID":"beba05fa-4b9d-44f2-88f4-87611a38604b","Type":"ContainerStarted","Data":"0ed475d97f2d1446aeecc00cc9eef238069d4baa1b4e1059f9e7fcdac717c6f7"} Dec 16 12:16:40 crc kubenswrapper[4805]: I1216 12:16:40.458951 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-775868cbc4-vvjnt" event={"ID":"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e","Type":"ContainerStarted","Data":"db06962f0d5a133de427daf2af2bc4ffa97dd27713188420efc54173e33cd6b7"} Dec 16 12:16:40 crc kubenswrapper[4805]: I1216 12:16:40.464843 4805 generic.go:334] "Generic (PLEG): container finished" podID="a451754e-cb72-4c93-833e-2c10d5b3bea6" containerID="4e86dd9a447d7aea5b9f40c28a518dd59ed11293751c3a0bcac02a831c7e65eb" exitCode=0 Dec 16 12:16:40 crc kubenswrapper[4805]: I1216 12:16:40.464957 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" event={"ID":"a451754e-cb72-4c93-833e-2c10d5b3bea6","Type":"ContainerDied","Data":"4e86dd9a447d7aea5b9f40c28a518dd59ed11293751c3a0bcac02a831c7e65eb"} Dec 16 12:16:40 crc kubenswrapper[4805]: I1216 12:16:40.464998 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" event={"ID":"a451754e-cb72-4c93-833e-2c10d5b3bea6","Type":"ContainerStarted","Data":"0bd3257e223f50005aff9bc6f7e4901f28a13b6b76a107de091124929c393bff"} Dec 16 12:16:40 crc kubenswrapper[4805]: I1216 12:16:40.490279 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9975d546-lhbk6" event={"ID":"83820f81-bf79-42be-9f7f-5a67edb260ff","Type":"ContainerStarted","Data":"c4762fbda7306250fab7b9364cee72c58f40a32c4ed3ef3479cb4d579d5c1000"} Dec 16 12:16:40 crc kubenswrapper[4805]: I1216 12:16:40.490345 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9975d546-lhbk6" event={"ID":"83820f81-bf79-42be-9f7f-5a67edb260ff","Type":"ContainerStarted","Data":"b90cde3743d33668186decc90873490014061e8d11c4312b4e1ea96f188d7482"} Dec 16 12:16:40 crc kubenswrapper[4805]: I1216 12:16:40.745595 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7ff49bdd98-ng49z" Dec 16 12:16:41 crc kubenswrapper[4805]: I1216 12:16:41.507161 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" event={"ID":"a451754e-cb72-4c93-833e-2c10d5b3bea6","Type":"ContainerStarted","Data":"a67bf69d873a42d15dcabd48737d4143249c1615916a1408ab38bba5b23cd924"} Dec 16 12:16:41 crc kubenswrapper[4805]: I1216 12:16:41.508833 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:41 crc kubenswrapper[4805]: I1216 12:16:41.511234 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9975d546-lhbk6" event={"ID":"83820f81-bf79-42be-9f7f-5a67edb260ff","Type":"ContainerStarted","Data":"fcf6edfd61ef6ee5f6531b8e36a426e6722ba33083c4b35e33e2ada3d3466c00"} Dec 16 12:16:41 crc kubenswrapper[4805]: I1216 12:16:41.511550 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:41 crc kubenswrapper[4805]: I1216 12:16:41.511587 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:16:41 crc kubenswrapper[4805]: I1216 12:16:41.513124 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-775868cbc4-vvjnt" event={"ID":"2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e","Type":"ContainerStarted","Data":"eb69d8e4921ab9590322e10f8a8efc5c35dc287a8cbd5bfbe9042d0ba94634ae"} Dec 16 12:16:41 crc kubenswrapper[4805]: I1216 12:16:41.513805 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:41 crc kubenswrapper[4805]: I1216 12:16:41.513842 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:16:41 crc kubenswrapper[4805]: I1216 12:16:41.558391 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" podStartSLOduration=4.558364493 podStartE2EDuration="4.558364493s" podCreationTimestamp="2025-12-16 12:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:16:41.548598533 +0000 UTC m=+1275.266856338" watchObservedRunningTime="2025-12-16 12:16:41.558364493 +0000 UTC m=+1275.276622318" Dec 16 12:16:41 crc kubenswrapper[4805]: I1216 12:16:41.580726 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b9975d546-lhbk6" podStartSLOduration=3.580700073 podStartE2EDuration="3.580700073s" podCreationTimestamp="2025-12-16 12:16:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:16:41.571745057 +0000 UTC m=+1275.290002872" watchObservedRunningTime="2025-12-16 12:16:41.580700073 +0000 UTC m=+1275.298957888" Dec 16 12:16:41 crc kubenswrapper[4805]: I1216 12:16:41.604599 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-775868cbc4-vvjnt" podStartSLOduration=4.6045689880000005 podStartE2EDuration="4.604568988s" podCreationTimestamp="2025-12-16 12:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:16:41.597340371 +0000 UTC m=+1275.315598176" watchObservedRunningTime="2025-12-16 12:16:41.604568988 +0000 UTC m=+1275.322826813" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.180431 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69799999fb-rbm4h" podUID="d4af4b9e-77b2-4f27-8148-7000d60f2266" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.188597 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-69799999fb-rbm4h" podUID="d4af4b9e-77b2-4f27-8148-7000d60f2266" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.672758 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6c96b7bb8b-r7k4g"] Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.678268 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.683642 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.686705 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.728018 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c96b7bb8b-r7k4g"] Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.741901 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d8752f0-5469-43e4-9284-8dd712bfd63f-internal-tls-certs\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.741967 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d8752f0-5469-43e4-9284-8dd712bfd63f-logs\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.742080 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d8752f0-5469-43e4-9284-8dd712bfd63f-combined-ca-bundle\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.742113 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v22d\" (UniqueName: \"kubernetes.io/projected/7d8752f0-5469-43e4-9284-8dd712bfd63f-kube-api-access-5v22d\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.742172 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d8752f0-5469-43e4-9284-8dd712bfd63f-config-data\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.742198 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d8752f0-5469-43e4-9284-8dd712bfd63f-config-data-custom\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.742220 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d8752f0-5469-43e4-9284-8dd712bfd63f-public-tls-certs\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.843635 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d8752f0-5469-43e4-9284-8dd712bfd63f-combined-ca-bundle\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.844672 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v22d\" (UniqueName: \"kubernetes.io/projected/7d8752f0-5469-43e4-9284-8dd712bfd63f-kube-api-access-5v22d\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.844734 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d8752f0-5469-43e4-9284-8dd712bfd63f-config-data\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.844774 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d8752f0-5469-43e4-9284-8dd712bfd63f-config-data-custom\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.844796 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d8752f0-5469-43e4-9284-8dd712bfd63f-public-tls-certs\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.844943 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d8752f0-5469-43e4-9284-8dd712bfd63f-internal-tls-certs\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.844981 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d8752f0-5469-43e4-9284-8dd712bfd63f-logs\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.845418 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d8752f0-5469-43e4-9284-8dd712bfd63f-logs\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.853598 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d8752f0-5469-43e4-9284-8dd712bfd63f-combined-ca-bundle\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.859411 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 16 
12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.861080 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.862740 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d8752f0-5469-43e4-9284-8dd712bfd63f-internal-tls-certs\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.863471 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d8752f0-5469-43e4-9284-8dd712bfd63f-config-data\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.865761 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d8752f0-5469-43e4-9284-8dd712bfd63f-config-data-custom\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.868912 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d8752f0-5469-43e4-9284-8dd712bfd63f-public-tls-certs\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.869823 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.870192 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mlt9m" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.870462 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.888858 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 16 12:16:42 crc kubenswrapper[4805]: I1216 12:16:42.916013 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v22d\" (UniqueName: \"kubernetes.io/projected/7d8752f0-5469-43e4-9284-8dd712bfd63f-kube-api-access-5v22d\") pod \"barbican-api-6c96b7bb8b-r7k4g\" (UID: \"7d8752f0-5469-43e4-9284-8dd712bfd63f\") " pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:43 crc kubenswrapper[4805]: I1216 12:16:43.009649 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:43 crc kubenswrapper[4805]: I1216 12:16:43.049259 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61335104-9325-4589-bcbb-fb19f4273dc2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"61335104-9325-4589-bcbb-fb19f4273dc2\") " pod="openstack/openstackclient" Dec 16 12:16:43 crc kubenswrapper[4805]: I1216 12:16:43.049594 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61335104-9325-4589-bcbb-fb19f4273dc2-openstack-config\") pod \"openstackclient\" (UID: \"61335104-9325-4589-bcbb-fb19f4273dc2\") " pod="openstack/openstackclient" Dec 16 12:16:43 crc kubenswrapper[4805]: I1216 12:16:43.049778 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnhvt\" (UniqueName: \"kubernetes.io/projected/61335104-9325-4589-bcbb-fb19f4273dc2-kube-api-access-xnhvt\") pod \"openstackclient\" (UID: \"61335104-9325-4589-bcbb-fb19f4273dc2\") " pod="openstack/openstackclient" Dec 16 12:16:43 crc kubenswrapper[4805]: I1216 12:16:43.050003 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61335104-9325-4589-bcbb-fb19f4273dc2-openstack-config-secret\") pod \"openstackclient\" (UID: \"61335104-9325-4589-bcbb-fb19f4273dc2\") " pod="openstack/openstackclient" Dec 16 12:16:43 crc kubenswrapper[4805]: I1216 12:16:43.151689 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61335104-9325-4589-bcbb-fb19f4273dc2-openstack-config-secret\") pod \"openstackclient\" (UID: \"61335104-9325-4589-bcbb-fb19f4273dc2\") " pod="openstack/openstackclient" Dec 16 12:16:43 crc kubenswrapper[4805]: I1216 12:16:43.151811 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61335104-9325-4589-bcbb-fb19f4273dc2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"61335104-9325-4589-bcbb-fb19f4273dc2\") " pod="openstack/openstackclient" Dec 16 12:16:43 crc kubenswrapper[4805]: I1216 12:16:43.151876 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61335104-9325-4589-bcbb-fb19f4273dc2-openstack-config\") pod \"openstackclient\" (UID: \"61335104-9325-4589-bcbb-fb19f4273dc2\") " pod="openstack/openstackclient" Dec 16 12:16:43 crc kubenswrapper[4805]: I1216 12:16:43.151970 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnhvt\" (UniqueName: \"kubernetes.io/projected/61335104-9325-4589-bcbb-fb19f4273dc2-kube-api-access-xnhvt\") pod \"openstackclient\" (UID: \"61335104-9325-4589-bcbb-fb19f4273dc2\") " pod="openstack/openstackclient" Dec 16 12:16:43 crc kubenswrapper[4805]: I1216 12:16:43.153274 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61335104-9325-4589-bcbb-fb19f4273dc2-openstack-config\") pod \"openstackclient\" (UID: \"61335104-9325-4589-bcbb-fb19f4273dc2\") " pod="openstack/openstackclient" Dec 16 12:16:43 crc kubenswrapper[4805]: I1216 12:16:43.159727 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61335104-9325-4589-bcbb-fb19f4273dc2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"61335104-9325-4589-bcbb-fb19f4273dc2\") " pod="openstack/openstackclient" Dec 16 12:16:43 crc kubenswrapper[4805]: I1216 12:16:43.172681 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnhvt\" (UniqueName: \"kubernetes.io/projected/61335104-9325-4589-bcbb-fb19f4273dc2-kube-api-access-xnhvt\") pod \"openstackclient\" (UID: \"61335104-9325-4589-bcbb-fb19f4273dc2\") " pod="openstack/openstackclient" Dec 16 12:16:43 crc kubenswrapper[4805]: I1216 12:16:43.173874 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61335104-9325-4589-bcbb-fb19f4273dc2-openstack-config-secret\") pod \"openstackclient\" (UID: \"61335104-9325-4589-bcbb-fb19f4273dc2\") " pod="openstack/openstackclient" Dec 16 12:16:43 crc kubenswrapper[4805]: I1216 12:16:43.303649 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 16 12:16:45 crc kubenswrapper[4805]: I1216 12:16:45.840365 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c96b7bb8b-r7k4g"] Dec 16 12:16:45 crc kubenswrapper[4805]: I1216 12:16:45.984300 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 16 12:16:46 crc kubenswrapper[4805]: I1216 12:16:46.608884 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"61335104-9325-4589-bcbb-fb19f4273dc2","Type":"ContainerStarted","Data":"4f738eaf10d3b9a9070a3cb9b6a2bb3a6535d58f269e4c90b305da9ab5c83469"} Dec 16 12:16:46 crc kubenswrapper[4805]: I1216 12:16:46.620014 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c4448787f-5pqnk" event={"ID":"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc","Type":"ContainerStarted","Data":"ad30a2fd27a136ca79cc567ccfa98a4c5b641d2508053dd35c3b6be388962d66"} Dec 16 12:16:46 crc kubenswrapper[4805]: I1216 12:16:46.620072 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c4448787f-5pqnk" event={"ID":"c60ef5b9-ec24-43c3-ab83-7a6f10a972bc","Type":"ContainerStarted","Data":"b9876ab5b0dd0b057c68f8d533af9a014a8f4b70c840f5a434b7a0bb3f9eb5ec"} Dec 16 12:16:46 crc kubenswrapper[4805]: I1216 12:16:46.640334 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c96b7bb8b-r7k4g" event={"ID":"7d8752f0-5469-43e4-9284-8dd712bfd63f","Type":"ContainerStarted","Data":"ff525d8cddb3e7a0cab3ddcb317b10f58addf06450f7d78e18994caae375ba61"} Dec 16 12:16:46 crc kubenswrapper[4805]: I1216 12:16:46.640383 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c96b7bb8b-r7k4g" event={"ID":"7d8752f0-5469-43e4-9284-8dd712bfd63f","Type":"ContainerStarted","Data":"312fdeaf423e27878f6839d296bf4fa680847e268c84c237e991ff414de3ad0e"} Dec 16 12:16:46 crc kubenswrapper[4805]: I1216 12:16:46.640396 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c96b7bb8b-r7k4g" event={"ID":"7d8752f0-5469-43e4-9284-8dd712bfd63f","Type":"ContainerStarted","Data":"490ebb97847be0814e0d0b472de88c35224d07dd347b0b48517c57de689d8bd7"} Dec 16 12:16:46 crc kubenswrapper[4805]: I1216 12:16:46.641270 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:46 crc kubenswrapper[4805]: I1216 12:16:46.641303 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:46 crc kubenswrapper[4805]: I1216 12:16:46.653336 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" event={"ID":"beba05fa-4b9d-44f2-88f4-87611a38604b","Type":"ContainerStarted","Data":"83d91f98c4cdd45d47bb20248a8f9b9a98ca7cd295d6f080ce4dfd110545cda4"} Dec 16 12:16:46 crc kubenswrapper[4805]: I1216 12:16:46.653410 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" event={"ID":"beba05fa-4b9d-44f2-88f4-87611a38604b","Type":"ContainerStarted","Data":"6f183ce75bbc949a58b4dead698943e9586dc8139f6442859c05a529e7d88aba"} Dec 16 12:16:46 crc kubenswrapper[4805]: I1216 12:16:46.679609 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-c4448787f-5pqnk" podStartSLOduration=3.603329992 podStartE2EDuration="9.679585927s" podCreationTimestamp="2025-12-16 12:16:37 +0000 UTC" firstStartedPulling="2025-12-16 12:16:39.189568616 +0000 UTC m=+1272.907826421" lastFinishedPulling="2025-12-16 12:16:45.265824561 +0000 UTC m=+1278.984082356" observedRunningTime="2025-12-16 12:16:46.644930193 +0000 UTC m=+1280.363188018" watchObservedRunningTime="2025-12-16 12:16:46.679585927 +0000 UTC m=+1280.397843752" Dec 16 12:16:46 crc kubenswrapper[4805]: I1216 12:16:46.691451 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6c96b7bb8b-r7k4g" podStartSLOduration=4.691424166 podStartE2EDuration="4.691424166s" podCreationTimestamp="2025-12-16 12:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:16:46.673106941 +0000 UTC m=+1280.391364746" watchObservedRunningTime="2025-12-16 12:16:46.691424166 +0000 UTC m=+1280.409681991" Dec 16 12:16:46 crc kubenswrapper[4805]: I1216 12:16:46.705491 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-64dcddf9dd-sdrs9" podStartSLOduration=4.027863217 podStartE2EDuration="9.705471209s" podCreationTimestamp="2025-12-16 12:16:37 +0000 UTC" firstStartedPulling="2025-12-16 12:16:39.589479545 +0000 UTC m=+1273.307737340" lastFinishedPulling="2025-12-16 12:16:45.267087527 +0000 UTC m=+1278.985345332" observedRunningTime="2025-12-16 12:16:46.690802018 +0000 UTC m=+1280.409059833" watchObservedRunningTime="2025-12-16 12:16:46.705471209 +0000 UTC m=+1280.423729034" Dec 16 12:16:47 crc kubenswrapper[4805]: I1216 12:16:47.061192 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d8fcdbb66-gmhjg" podUID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 16 12:16:48 crc kubenswrapper[4805]: I1216 12:16:48.198366 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:16:48 crc kubenswrapper[4805]: I1216 12:16:48.378303 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rrbmf"] Dec 16 12:16:48 crc kubenswrapper[4805]: I1216 12:16:48.378843 4805 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" podUID="aaba3492-18c7-49bd-90d6-7429d516cb3b" containerName="dnsmasq-dns" containerID="cri-o://ff9e1892082140c7071f1206934c7e81034c5b199c02d522a48a93d561f0d700" gracePeriod=10 Dec 16 12:16:48 crc kubenswrapper[4805]: I1216 12:16:48.816997 4805 generic.go:334] "Generic (PLEG): container finished" podID="aaba3492-18c7-49bd-90d6-7429d516cb3b" containerID="ff9e1892082140c7071f1206934c7e81034c5b199c02d522a48a93d561f0d700" exitCode=0 Dec 16 12:16:48 crc kubenswrapper[4805]: I1216 12:16:48.817551 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" event={"ID":"aaba3492-18c7-49bd-90d6-7429d516cb3b","Type":"ContainerDied","Data":"ff9e1892082140c7071f1206934c7e81034c5b199c02d522a48a93d561f0d700"} Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.595771 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.598588 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-dns-svc\") pod \"aaba3492-18c7-49bd-90d6-7429d516cb3b\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.598688 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clvcm\" (UniqueName: \"kubernetes.io/projected/aaba3492-18c7-49bd-90d6-7429d516cb3b-kube-api-access-clvcm\") pod \"aaba3492-18c7-49bd-90d6-7429d516cb3b\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.598717 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-config\") pod \"aaba3492-18c7-49bd-90d6-7429d516cb3b\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.598741 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-dns-swift-storage-0\") pod \"aaba3492-18c7-49bd-90d6-7429d516cb3b\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.598769 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-ovsdbserver-sb\") pod \"aaba3492-18c7-49bd-90d6-7429d516cb3b\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.598861 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-ovsdbserver-nb\") pod \"aaba3492-18c7-49bd-90d6-7429d516cb3b\" (UID: \"aaba3492-18c7-49bd-90d6-7429d516cb3b\") " Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.627575 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaba3492-18c7-49bd-90d6-7429d516cb3b-kube-api-access-clvcm" (OuterVolumeSpecName: "kube-api-access-clvcm") pod "aaba3492-18c7-49bd-90d6-7429d516cb3b" (UID: "aaba3492-18c7-49bd-90d6-7429d516cb3b"). 
InnerVolumeSpecName "kube-api-access-clvcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.701931 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clvcm\" (UniqueName: \"kubernetes.io/projected/aaba3492-18c7-49bd-90d6-7429d516cb3b-kube-api-access-clvcm\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.737052 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aaba3492-18c7-49bd-90d6-7429d516cb3b" (UID: "aaba3492-18c7-49bd-90d6-7429d516cb3b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.768794 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-config" (OuterVolumeSpecName: "config") pod "aaba3492-18c7-49bd-90d6-7429d516cb3b" (UID: "aaba3492-18c7-49bd-90d6-7429d516cb3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.780772 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aaba3492-18c7-49bd-90d6-7429d516cb3b" (UID: "aaba3492-18c7-49bd-90d6-7429d516cb3b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.784698 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aaba3492-18c7-49bd-90d6-7429d516cb3b" (UID: "aaba3492-18c7-49bd-90d6-7429d516cb3b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.785700 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aaba3492-18c7-49bd-90d6-7429d516cb3b" (UID: "aaba3492-18c7-49bd-90d6-7429d516cb3b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.804590 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.804634 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.804645 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.804655 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.804664 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaba3492-18c7-49bd-90d6-7429d516cb3b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.836236 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" event={"ID":"aaba3492-18c7-49bd-90d6-7429d516cb3b","Type":"ContainerDied","Data":"2210d33322d252f5814018092aaae2cff8cc77518cf57a6dc32caf9389926ac7"} Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.836307 4805 scope.go:117] "RemoveContainer" containerID="ff9e1892082140c7071f1206934c7e81034c5b199c02d522a48a93d561f0d700" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.836586 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-rrbmf" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.864596 4805 scope.go:117] "RemoveContainer" containerID="fed3d206b56b67c98b096c52a182eb1def5dd61af473ba040f1ad1ee5e20d224" Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.891841 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rrbmf"] Dec 16 12:16:49 crc kubenswrapper[4805]: I1216 12:16:49.912707 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-rrbmf"] Dec 16 12:16:50 crc kubenswrapper[4805]: I1216 12:16:50.549897 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaba3492-18c7-49bd-90d6-7429d516cb3b" path="/var/lib/kubelet/pods/aaba3492-18c7-49bd-90d6-7429d516cb3b/volumes" Dec 16 12:16:52 crc kubenswrapper[4805]: I1216 12:16:52.750428 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 12:16:52 crc kubenswrapper[4805]: I1216 12:16:52.750374 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 12:16:53 crc kubenswrapper[4805]: I1216 12:16:53.474967 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 16 12:16:53 crc kubenswrapper[4805]: I1216 12:16:53.786430 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 12:16:55 crc kubenswrapper[4805]: I1216 12:16:55.747456 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:56 crc kubenswrapper[4805]: I1216 12:16:56.147682 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c96b7bb8b-r7k4g" Dec 16 12:16:56 crc kubenswrapper[4805]: I1216 12:16:56.319711 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b9975d546-lhbk6"] Dec 16 12:16:56 crc kubenswrapper[4805]: I1216 12:16:56.320263 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api-log" containerID="cri-o://c4762fbda7306250fab7b9364cee72c58f40a32c4ed3ef3479cb4d579d5c1000" gracePeriod=30 Dec 16 12:16:56 crc kubenswrapper[4805]: I1216 12:16:56.320640 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api" 
containerID="cri-o://fcf6edfd61ef6ee5f6531b8e36a426e6722ba33083c4b35e33e2ada3d3466c00" gracePeriod=30 Dec 16 12:16:56 crc kubenswrapper[4805]: I1216 12:16:56.352412 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": EOF" Dec 16 12:16:56 crc kubenswrapper[4805]: I1216 12:16:56.352758 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": EOF" Dec 16 12:16:56 crc kubenswrapper[4805]: I1216 12:16:56.362034 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": EOF" Dec 16 12:16:56 crc kubenswrapper[4805]: I1216 12:16:56.362415 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": EOF" Dec 16 12:16:56 crc kubenswrapper[4805]: I1216 12:16:56.981886 4805 generic.go:334] "Generic (PLEG): container finished" podID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerID="c4762fbda7306250fab7b9364cee72c58f40a32c4ed3ef3479cb4d579d5c1000" exitCode=143 Dec 16 12:16:56 crc kubenswrapper[4805]: I1216 12:16:56.981936 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9975d546-lhbk6" event={"ID":"83820f81-bf79-42be-9f7f-5a67edb260ff","Type":"ContainerDied","Data":"c4762fbda7306250fab7b9364cee72c58f40a32c4ed3ef3479cb4d579d5c1000"} Dec 16 12:16:57 crc kubenswrapper[4805]: I1216 12:16:57.018363 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6c96b7bb8b-r7k4g" podUID="7d8752f0-5469-43e4-9284-8dd712bfd63f" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.157:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:16:57 crc kubenswrapper[4805]: I1216 12:16:57.062491 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6d8fcdbb66-gmhjg" podUID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 16 12:16:57 crc kubenswrapper[4805]: I1216 12:16:57.063064 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:16:58 crc kubenswrapper[4805]: I1216 12:16:58.697064 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 16 12:17:00 crc kubenswrapper[4805]: I1216 12:17:00.478383 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6c96b7bb8b-r7k4g" podUID="7d8752f0-5469-43e4-9284-8dd712bfd63f" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.157:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:17:01 crc kubenswrapper[4805]: I1216 12:17:01.179598 
4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c96b7bb8b-r7k4g" podUID="7d8752f0-5469-43e4-9284-8dd712bfd63f" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.157:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:17:01 crc kubenswrapper[4805]: I1216 12:17:01.498368 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 12:17:01 crc kubenswrapper[4805]: I1216 12:17:01.498366 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 12:17:02 crc kubenswrapper[4805]: I1216 12:17:02.027519 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6c96b7bb8b-r7k4g" podUID="7d8752f0-5469-43e4-9284-8dd712bfd63f" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.157:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.077671 4805 generic.go:334] "Generic (PLEG): container finished" podID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" containerID="985f88c1f9a128dbb9319d7a54a3e3a7b4e72ec8f5b8e705eb481b82a7cd4102" exitCode=137 Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.077779 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d8fcdbb66-gmhjg" event={"ID":"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c","Type":"ContainerDied","Data":"985f88c1f9a128dbb9319d7a54a3e3a7b4e72ec8f5b8e705eb481b82a7cd4102"} Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.078578 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d8fcdbb66-gmhjg" event={"ID":"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c","Type":"ContainerDied","Data":"7dec06c079bc1453f05ec66edf5d8526484e7eae1482b2a299e45c815fd2aba2"} Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.078658 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dec06c079bc1453f05ec66edf5d8526484e7eae1482b2a299e45c815fd2aba2" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.114982 4805 util.go:48] "No ready sandbox for pod can be found. 
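[Editor's note: the exit codes in the two "container finished" records — 143 for barbican-api-log above and 137 for horizon here — follow the usual 128+signal convention: 143 = 128+SIGTERM(15), a container that shut down within its grace period, while 137 = 128+SIGKILL(9), one killed outright, typically after the grace period or a liveness deadline ran out. A two-line Go illustration of the arithmetic, valid on Linux:]

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        fmt.Println("SIGTERM exit code:", 128+int(syscall.SIGTERM)) // 143: stopped within the grace period
        fmt.Println("SIGKILL exit code:", 128+int(syscall.SIGKILL)) // 137: killed after the grace period
    }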
Need to start a new one" pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.209601 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-logs\") pod \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.209675 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-horizon-tls-certs\") pod \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.209765 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-horizon-secret-key\") pod \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.209823 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-config-data\") pod \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.209885 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndh4p\" (UniqueName: \"kubernetes.io/projected/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-kube-api-access-ndh4p\") pod \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.209915 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-combined-ca-bundle\") pod \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.209944 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-scripts\") pod \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\" (UID: \"2996ec40-b3d5-470f-b7fa-9ba261cb6e0c\") " Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.210707 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-logs" (OuterVolumeSpecName: "logs") pod "2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" (UID: "2996ec40-b3d5-470f-b7fa-9ba261cb6e0c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.218613 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" (UID: "2996ec40-b3d5-470f-b7fa-9ba261cb6e0c"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.220671 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-kube-api-access-ndh4p" (OuterVolumeSpecName: "kube-api-access-ndh4p") pod "2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" (UID: "2996ec40-b3d5-470f-b7fa-9ba261cb6e0c"). InnerVolumeSpecName "kube-api-access-ndh4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.250643 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-scripts" (OuterVolumeSpecName: "scripts") pod "2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" (UID: "2996ec40-b3d5-470f-b7fa-9ba261cb6e0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.252967 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" (UID: "2996ec40-b3d5-470f-b7fa-9ba261cb6e0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.257130 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-config-data" (OuterVolumeSpecName: "config-data") pod "2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" (UID: "2996ec40-b3d5-470f-b7fa-9ba261cb6e0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.281908 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" (UID: "2996ec40-b3d5-470f-b7fa-9ba261cb6e0c"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.312213 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-logs\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.312278 4805 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.312295 4805 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.312307 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.312321 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndh4p\" (UniqueName: \"kubernetes.io/projected/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-kube-api-access-ndh4p\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.312333 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.312345 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.315930 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-58c65944b9-fbmdw"] Dec 16 12:17:04 crc kubenswrapper[4805]: E1216 12:17:04.316364 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaba3492-18c7-49bd-90d6-7429d516cb3b" containerName="init" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.316382 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaba3492-18c7-49bd-90d6-7429d516cb3b" containerName="init" Dec 16 12:17:04 crc kubenswrapper[4805]: E1216 12:17:04.316403 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaba3492-18c7-49bd-90d6-7429d516cb3b" containerName="dnsmasq-dns" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.316411 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaba3492-18c7-49bd-90d6-7429d516cb3b" containerName="dnsmasq-dns" Dec 16 12:17:04 crc kubenswrapper[4805]: E1216 12:17:04.316429 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" containerName="horizon" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.316435 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" containerName="horizon" Dec 16 12:17:04 crc kubenswrapper[4805]: E1216 12:17:04.316449 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" containerName="horizon-log" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.316455 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" containerName="horizon-log" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.316620 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" containerName="horizon" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.316635 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaba3492-18c7-49bd-90d6-7429d516cb3b" containerName="dnsmasq-dns" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.316647 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" containerName="horizon-log" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.317564 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.320965 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.321170 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.321370 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.366281 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58c65944b9-fbmdw"] Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.424756 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c59b441a-f0f3-44c5-a0b8-42f00c60da72-internal-tls-certs\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.425276 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59b441a-f0f3-44c5-a0b8-42f00c60da72-combined-ca-bundle\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.425467 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c59b441a-f0f3-44c5-a0b8-42f00c60da72-public-tls-certs\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.425613 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c59b441a-f0f3-44c5-a0b8-42f00c60da72-log-httpd\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.425945 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7n4f\" (UniqueName: \"kubernetes.io/projected/c59b441a-f0f3-44c5-a0b8-42f00c60da72-kube-api-access-n7n4f\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " 
pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.425972 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59b441a-f0f3-44c5-a0b8-42f00c60da72-config-data\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.426219 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c59b441a-f0f3-44c5-a0b8-42f00c60da72-etc-swift\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.426540 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c59b441a-f0f3-44c5-a0b8-42f00c60da72-run-httpd\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.528371 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59b441a-f0f3-44c5-a0b8-42f00c60da72-combined-ca-bundle\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.528743 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c59b441a-f0f3-44c5-a0b8-42f00c60da72-public-tls-certs\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.528854 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c59b441a-f0f3-44c5-a0b8-42f00c60da72-log-httpd\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.529040 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7n4f\" (UniqueName: \"kubernetes.io/projected/c59b441a-f0f3-44c5-a0b8-42f00c60da72-kube-api-access-n7n4f\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.529169 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59b441a-f0f3-44c5-a0b8-42f00c60da72-config-data\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.529355 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c59b441a-f0f3-44c5-a0b8-42f00c60da72-etc-swift\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " 
pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.529506 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c59b441a-f0f3-44c5-a0b8-42f00c60da72-run-httpd\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.529751 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c59b441a-f0f3-44c5-a0b8-42f00c60da72-internal-tls-certs\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.530087 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c59b441a-f0f3-44c5-a0b8-42f00c60da72-run-httpd\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.530249 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c59b441a-f0f3-44c5-a0b8-42f00c60da72-log-httpd\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.533275 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c59b441a-f0f3-44c5-a0b8-42f00c60da72-public-tls-certs\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.533332 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c59b441a-f0f3-44c5-a0b8-42f00c60da72-etc-swift\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.534269 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59b441a-f0f3-44c5-a0b8-42f00c60da72-config-data\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.534377 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59b441a-f0f3-44c5-a0b8-42f00c60da72-combined-ca-bundle\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.537954 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c59b441a-f0f3-44c5-a0b8-42f00c60da72-internal-tls-certs\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.550911 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n7n4f\" (UniqueName: \"kubernetes.io/projected/c59b441a-f0f3-44c5-a0b8-42f00c60da72-kube-api-access-n7n4f\") pod \"swift-proxy-58c65944b9-fbmdw\" (UID: \"c59b441a-f0f3-44c5-a0b8-42f00c60da72\") " pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.587103 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.588876 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="ceilometer-central-agent" containerID="cri-o://931670af68babbfee261e142cb86cac906c0c9a9ea64134ade4fb7d0a0e3c0c3" gracePeriod=30 Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.588962 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="ceilometer-notification-agent" containerID="cri-o://016121b0bc4f0bb282bb62dce5a20f5d6943760c34026bf99981e0592c0dae6a" gracePeriod=30 Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.588971 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="sg-core" containerID="cri-o://f22085a138dff8c979e5ba861a288a8d56e982e940f4a31e2318f5e706a9b221" gracePeriod=30 Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.588913 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="proxy-httpd" containerID="cri-o://71a2103655edc4a8a6550f857ae913ee1e0b6bf530a88798caa5255a70a8d68e" gracePeriod=30 Dec 16 12:17:04 crc kubenswrapper[4805]: I1216 12:17:04.672963 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:05 crc kubenswrapper[4805]: I1216 12:17:05.104507 4805 generic.go:334] "Generic (PLEG): container finished" podID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerID="71a2103655edc4a8a6550f857ae913ee1e0b6bf530a88798caa5255a70a8d68e" exitCode=0 Dec 16 12:17:05 crc kubenswrapper[4805]: I1216 12:17:05.104736 4805 generic.go:334] "Generic (PLEG): container finished" podID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerID="f22085a138dff8c979e5ba861a288a8d56e982e940f4a31e2318f5e706a9b221" exitCode=2 Dec 16 12:17:05 crc kubenswrapper[4805]: I1216 12:17:05.104780 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2","Type":"ContainerDied","Data":"71a2103655edc4a8a6550f857ae913ee1e0b6bf530a88798caa5255a70a8d68e"} Dec 16 12:17:05 crc kubenswrapper[4805]: I1216 12:17:05.104864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2","Type":"ContainerDied","Data":"f22085a138dff8c979e5ba861a288a8d56e982e940f4a31e2318f5e706a9b221"} Dec 16 12:17:05 crc kubenswrapper[4805]: I1216 12:17:05.106194 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d8fcdbb66-gmhjg" Dec 16 12:17:05 crc kubenswrapper[4805]: I1216 12:17:05.107424 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"61335104-9325-4589-bcbb-fb19f4273dc2","Type":"ContainerStarted","Data":"d4631164278a86c57ffe46d485c3eea90e15a7a84673e9286a120c275a18b276"} Dec 16 12:17:05 crc kubenswrapper[4805]: I1216 12:17:05.135966 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=5.072585958 podStartE2EDuration="23.135944647s" podCreationTimestamp="2025-12-16 12:16:42 +0000 UTC" firstStartedPulling="2025-12-16 12:16:46.04437403 +0000 UTC m=+1279.762631835" lastFinishedPulling="2025-12-16 12:17:04.107732719 +0000 UTC m=+1297.825990524" observedRunningTime="2025-12-16 12:17:05.130913423 +0000 UTC m=+1298.849171228" watchObservedRunningTime="2025-12-16 12:17:05.135944647 +0000 UTC m=+1298.854202462" Dec 16 12:17:05 crc kubenswrapper[4805]: I1216 12:17:05.188257 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d8fcdbb66-gmhjg"] Dec 16 12:17:05 crc kubenswrapper[4805]: I1216 12:17:05.200760 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d8fcdbb66-gmhjg"] Dec 16 12:17:05 crc kubenswrapper[4805]: I1216 12:17:05.458063 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58c65944b9-fbmdw"] Dec 16 12:17:05 crc kubenswrapper[4805]: I1216 12:17:05.859002 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:55004->10.217.0.156:9311: read: connection reset by peer" Dec 16 12:17:05 crc kubenswrapper[4805]: I1216 12:17:05.860043 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:55006->10.217.0.156:9311: read: connection reset by peer" Dec 16 12:17:05 crc kubenswrapper[4805]: I1216 12:17:05.860402 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": dial tcp 10.217.0.156:9311: connect: connection refused" Dec 16 12:17:05 crc kubenswrapper[4805]: I1216 12:17:05.860466 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b9975d546-lhbk6" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": dial tcp 10.217.0.156:9311: connect: connection refused" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.130248 4805 generic.go:334] "Generic (PLEG): container finished" podID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerID="fcf6edfd61ef6ee5f6531b8e36a426e6722ba33083c4b35e33e2ada3d3466c00" exitCode=0 Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.130322 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9975d546-lhbk6" 
event={"ID":"83820f81-bf79-42be-9f7f-5a67edb260ff","Type":"ContainerDied","Data":"fcf6edfd61ef6ee5f6531b8e36a426e6722ba33083c4b35e33e2ada3d3466c00"} Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.136471 4805 generic.go:334] "Generic (PLEG): container finished" podID="8bbda936-b96e-491c-9d8c-1e4595a42566" containerID="577888b0a6ff989ca1016b9930991e75bc4527c3bbae56a147fa1e7bcda1b549" exitCode=0 Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.136563 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sljmv" event={"ID":"8bbda936-b96e-491c-9d8c-1e4595a42566","Type":"ContainerDied","Data":"577888b0a6ff989ca1016b9930991e75bc4527c3bbae56a147fa1e7bcda1b549"} Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.149222 4805 generic.go:334] "Generic (PLEG): container finished" podID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerID="016121b0bc4f0bb282bb62dce5a20f5d6943760c34026bf99981e0592c0dae6a" exitCode=0 Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.149265 4805 generic.go:334] "Generic (PLEG): container finished" podID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerID="931670af68babbfee261e142cb86cac906c0c9a9ea64134ade4fb7d0a0e3c0c3" exitCode=0 Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.149365 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2","Type":"ContainerDied","Data":"016121b0bc4f0bb282bb62dce5a20f5d6943760c34026bf99981e0592c0dae6a"} Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.149426 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2","Type":"ContainerDied","Data":"931670af68babbfee261e142cb86cac906c0c9a9ea64134ade4fb7d0a0e3c0c3"} Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.162644 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58c65944b9-fbmdw" event={"ID":"c59b441a-f0f3-44c5-a0b8-42f00c60da72","Type":"ContainerStarted","Data":"8522669d6edfb0d84d755a5bd01f8cbb2d1fa88d79a798950d9274a46d449e41"} Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.162685 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58c65944b9-fbmdw" event={"ID":"c59b441a-f0f3-44c5-a0b8-42f00c60da72","Type":"ContainerStarted","Data":"160e7d6cfe312f6061dfd935e26f13c81194f9f05ca06c9df05010f410643ca5"} Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.362019 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.481451 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-config-data-custom\") pod \"83820f81-bf79-42be-9f7f-5a67edb260ff\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.481566 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-config-data\") pod \"83820f81-bf79-42be-9f7f-5a67edb260ff\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.481683 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83820f81-bf79-42be-9f7f-5a67edb260ff-logs\") pod \"83820f81-bf79-42be-9f7f-5a67edb260ff\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.481762 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qrcl\" (UniqueName: \"kubernetes.io/projected/83820f81-bf79-42be-9f7f-5a67edb260ff-kube-api-access-8qrcl\") pod \"83820f81-bf79-42be-9f7f-5a67edb260ff\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.481790 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-combined-ca-bundle\") pod \"83820f81-bf79-42be-9f7f-5a67edb260ff\" (UID: \"83820f81-bf79-42be-9f7f-5a67edb260ff\") " Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.484067 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83820f81-bf79-42be-9f7f-5a67edb260ff-logs" (OuterVolumeSpecName: "logs") pod "83820f81-bf79-42be-9f7f-5a67edb260ff" (UID: "83820f81-bf79-42be-9f7f-5a67edb260ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.488891 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83820f81-bf79-42be-9f7f-5a67edb260ff-kube-api-access-8qrcl" (OuterVolumeSpecName: "kube-api-access-8qrcl") pod "83820f81-bf79-42be-9f7f-5a67edb260ff" (UID: "83820f81-bf79-42be-9f7f-5a67edb260ff"). InnerVolumeSpecName "kube-api-access-8qrcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.489282 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "83820f81-bf79-42be-9f7f-5a67edb260ff" (UID: "83820f81-bf79-42be-9f7f-5a67edb260ff"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.514979 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83820f81-bf79-42be-9f7f-5a67edb260ff" (UID: "83820f81-bf79-42be-9f7f-5a67edb260ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.539372 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2996ec40-b3d5-470f-b7fa-9ba261cb6e0c" path="/var/lib/kubelet/pods/2996ec40-b3d5-470f-b7fa-9ba261cb6e0c/volumes" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.548957 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.575386 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-config-data" (OuterVolumeSpecName: "config-data") pod "83820f81-bf79-42be-9f7f-5a67edb260ff" (UID: "83820f81-bf79-42be-9f7f-5a67edb260ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.584404 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qrcl\" (UniqueName: \"kubernetes.io/projected/83820f81-bf79-42be-9f7f-5a67edb260ff-kube-api-access-8qrcl\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.584434 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.584467 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.584476 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83820f81-bf79-42be-9f7f-5a67edb260ff-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.584485 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83820f81-bf79-42be-9f7f-5a67edb260ff-logs\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.685723 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-combined-ca-bundle\") pod \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.685862 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-log-httpd\") pod \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.685900 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-scripts\") pod \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.685933 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ft6t\" (UniqueName: \"kubernetes.io/projected/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-kube-api-access-6ft6t\") 
pod \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.685960 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-config-data\") pod \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.686072 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-run-httpd\") pod \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.686097 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-sg-core-conf-yaml\") pod \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\" (UID: \"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2\") " Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.687081 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" (UID: "c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.687516 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" (UID: "c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.692663 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-scripts" (OuterVolumeSpecName: "scripts") pod "c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" (UID: "c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.693913 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-kube-api-access-6ft6t" (OuterVolumeSpecName: "kube-api-access-6ft6t") pod "c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" (UID: "c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2"). InnerVolumeSpecName "kube-api-access-6ft6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.800891 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.801161 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.801226 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ft6t\" (UniqueName: \"kubernetes.io/projected/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-kube-api-access-6ft6t\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.801365 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.828436 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" (UID: "c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.829787 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" (UID: "c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.900447 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-config-data" (OuterVolumeSpecName: "config-data") pod "c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" (UID: "c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.902620 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.902704 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:06 crc kubenswrapper[4805]: I1216 12:17:06.902775 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.172065 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9975d546-lhbk6" event={"ID":"83820f81-bf79-42be-9f7f-5a67edb260ff","Type":"ContainerDied","Data":"b90cde3743d33668186decc90873490014061e8d11c4312b4e1ea96f188d7482"} Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.172125 4805 scope.go:117] "RemoveContainer" containerID="fcf6edfd61ef6ee5f6531b8e36a426e6722ba33083c4b35e33e2ada3d3466c00" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.172333 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b9975d546-lhbk6" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.175387 4805 generic.go:334] "Generic (PLEG): container finished" podID="aa98eb47-6335-4544-b5d6-718b55075000" containerID="c8c5f0d29b119f9ab184cf85bddef944ae6359182ffbe96fd8612c9df390f09b" exitCode=0 Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.175478 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9v2nl" event={"ID":"aa98eb47-6335-4544-b5d6-718b55075000","Type":"ContainerDied","Data":"c8c5f0d29b119f9ab184cf85bddef944ae6359182ffbe96fd8612c9df390f09b"} Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.194559 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2","Type":"ContainerDied","Data":"a37c176ee101f8ef3622a06499197c594d98581d6729b661b33c6a3c975b6805"} Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.194918 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.201210 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58c65944b9-fbmdw" event={"ID":"c59b441a-f0f3-44c5-a0b8-42f00c60da72","Type":"ContainerStarted","Data":"48c8c02a06e55c927a89f3f3ebe12d39701dee485e5a617b4a7af574c46cd18d"} Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.217460 4805 scope.go:117] "RemoveContainer" containerID="c4762fbda7306250fab7b9364cee72c58f40a32c4ed3ef3479cb4d579d5c1000" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.244914 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b9975d546-lhbk6"] Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.276960 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6b9975d546-lhbk6"] Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.296471 4805 scope.go:117] "RemoveContainer" containerID="71a2103655edc4a8a6550f857ae913ee1e0b6bf530a88798caa5255a70a8d68e" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.322905 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-58c65944b9-fbmdw" podStartSLOduration=3.322882787 podStartE2EDuration="3.322882787s" podCreationTimestamp="2025-12-16 12:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:17:07.297939602 +0000 UTC m=+1301.016197407" watchObservedRunningTime="2025-12-16 12:17:07.322882787 +0000 UTC m=+1301.041140602" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.339465 4805 scope.go:117] "RemoveContainer" containerID="f22085a138dff8c979e5ba861a288a8d56e982e940f4a31e2318f5e706a9b221" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.345693 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.378221 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.385966 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:17:07 crc kubenswrapper[4805]: E1216 12:17:07.386767 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="sg-core" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.386789 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="sg-core" Dec 16 12:17:07 crc kubenswrapper[4805]: E1216 12:17:07.386812 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="ceilometer-notification-agent" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.386820 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="ceilometer-notification-agent" Dec 16 12:17:07 crc kubenswrapper[4805]: E1216 12:17:07.386844 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.386854 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api" Dec 16 12:17:07 crc kubenswrapper[4805]: E1216 12:17:07.386869 4805 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api-log" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.386888 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api-log" Dec 16 12:17:07 crc kubenswrapper[4805]: E1216 12:17:07.386912 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="ceilometer-central-agent" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.386920 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="ceilometer-central-agent" Dec 16 12:17:07 crc kubenswrapper[4805]: E1216 12:17:07.386949 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="proxy-httpd" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.386959 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="proxy-httpd" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.387267 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api-log" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.387288 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="ceilometer-central-agent" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.387303 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="sg-core" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.387330 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="ceilometer-notification-agent" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.387348 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" containerName="proxy-httpd" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.387358 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" containerName="barbican-api" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.392657 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.408408 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.411400 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.413427 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/240e88fd-9bca-4893-8ed9-3be361b7e220-log-httpd\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.413499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh2n8\" (UniqueName: \"kubernetes.io/projected/240e88fd-9bca-4893-8ed9-3be361b7e220-kube-api-access-zh2n8\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.413541 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.413793 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/240e88fd-9bca-4893-8ed9-3be361b7e220-run-httpd\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.413819 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-config-data\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.413839 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-scripts\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.413856 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.428548 4805 scope.go:117] "RemoveContainer" containerID="016121b0bc4f0bb282bb62dce5a20f5d6943760c34026bf99981e0592c0dae6a" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.491386 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.518173 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/240e88fd-9bca-4893-8ed9-3be361b7e220-log-httpd\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.518297 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh2n8\" (UniqueName: \"kubernetes.io/projected/240e88fd-9bca-4893-8ed9-3be361b7e220-kube-api-access-zh2n8\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.518348 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.518404 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/240e88fd-9bca-4893-8ed9-3be361b7e220-run-httpd\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.518439 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-config-data\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.518470 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-scripts\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.518495 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.521222 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/240e88fd-9bca-4893-8ed9-3be361b7e220-log-httpd\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.522966 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/240e88fd-9bca-4893-8ed9-3be361b7e220-run-httpd\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.526080 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.527333 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.564290 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-scripts\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.567027 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-config-data\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.585914 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh2n8\" (UniqueName: \"kubernetes.io/projected/240e88fd-9bca-4893-8ed9-3be361b7e220-kube-api-access-zh2n8\") pod \"ceilometer-0\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.609758 4805 scope.go:117] "RemoveContainer" containerID="931670af68babbfee261e142cb86cac906c0c9a9ea64134ade4fb7d0a0e3c0c3" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.634283 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sljmv" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.722604 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8bbda936-b96e-491c-9d8c-1e4595a42566-config\") pod \"8bbda936-b96e-491c-9d8c-1e4595a42566\" (UID: \"8bbda936-b96e-491c-9d8c-1e4595a42566\") " Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.722698 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdngh\" (UniqueName: \"kubernetes.io/projected/8bbda936-b96e-491c-9d8c-1e4595a42566-kube-api-access-qdngh\") pod \"8bbda936-b96e-491c-9d8c-1e4595a42566\" (UID: \"8bbda936-b96e-491c-9d8c-1e4595a42566\") " Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.722809 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbda936-b96e-491c-9d8c-1e4595a42566-combined-ca-bundle\") pod \"8bbda936-b96e-491c-9d8c-1e4595a42566\" (UID: \"8bbda936-b96e-491c-9d8c-1e4595a42566\") " Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.728791 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bbda936-b96e-491c-9d8c-1e4595a42566-kube-api-access-qdngh" (OuterVolumeSpecName: "kube-api-access-qdngh") pod "8bbda936-b96e-491c-9d8c-1e4595a42566" (UID: "8bbda936-b96e-491c-9d8c-1e4595a42566"). InnerVolumeSpecName "kube-api-access-qdngh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.781407 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bbda936-b96e-491c-9d8c-1e4595a42566-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bbda936-b96e-491c-9d8c-1e4595a42566" (UID: "8bbda936-b96e-491c-9d8c-1e4595a42566"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.781568 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bbda936-b96e-491c-9d8c-1e4595a42566-config" (OuterVolumeSpecName: "config") pod "8bbda936-b96e-491c-9d8c-1e4595a42566" (UID: "8bbda936-b96e-491c-9d8c-1e4595a42566"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.810592 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.825224 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8bbda936-b96e-491c-9d8c-1e4595a42566-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.825256 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdngh\" (UniqueName: \"kubernetes.io/projected/8bbda936-b96e-491c-9d8c-1e4595a42566-kube-api-access-qdngh\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:07 crc kubenswrapper[4805]: I1216 12:17:07.825268 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbda936-b96e-491c-9d8c-1e4595a42566-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.213898 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sljmv" event={"ID":"8bbda936-b96e-491c-9d8c-1e4595a42566","Type":"ContainerDied","Data":"2cdb3041af2ef3fea1cfd0ee4f396191cf110fc70e8f3d80b74f6c90a7562da4"} Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.214350 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cdb3041af2ef3fea1cfd0ee4f396191cf110fc70e8f3d80b74f6c90a7562da4" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.214258 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sljmv" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.216264 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.216470 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.382897 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.539600 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83820f81-bf79-42be-9f7f-5a67edb260ff" path="/var/lib/kubelet/pods/83820f81-bf79-42be-9f7f-5a67edb260ff/volumes" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.540281 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2" path="/var/lib/kubelet/pods/c4c1e20b-c4ee-45b3-bfa2-d2aa13815ed2/volumes" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.593282 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mvtlh"] Dec 16 12:17:08 crc kubenswrapper[4805]: E1216 12:17:08.597873 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbda936-b96e-491c-9d8c-1e4595a42566" containerName="neutron-db-sync" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.597907 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbda936-b96e-491c-9d8c-1e4595a42566" containerName="neutron-db-sync" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.598191 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bbda936-b96e-491c-9d8c-1e4595a42566" containerName="neutron-db-sync" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.605759 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.628672 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mvtlh"] Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.647833 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.647962 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.648041 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.648086 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.648150 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgdwc\" (UniqueName: \"kubernetes.io/projected/1b539a01-aa65-4642-84e4-06ca8783f813-kube-api-access-zgdwc\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.648180 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-config\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.743419 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59856f68c6-mltb2"] Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.745469 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.754612 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-tk2md" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.755335 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.756009 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.757343 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.757390 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.757410 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.757441 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgdwc\" (UniqueName: \"kubernetes.io/projected/1b539a01-aa65-4642-84e4-06ca8783f813-kube-api-access-zgdwc\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.757465 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-config\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.758959 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-config\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.757011 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.755385 
4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.755424 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.764428 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.764442 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.765000 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.785928 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59856f68c6-mltb2"] Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.813116 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgdwc\" (UniqueName: \"kubernetes.io/projected/1b539a01-aa65-4642-84e4-06ca8783f813-kube-api-access-zgdwc\") pod \"dnsmasq-dns-75c8ddd69c-mvtlh\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.868796 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-config\") pod \"neutron-59856f68c6-mltb2\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.868974 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9mp2\" (UniqueName: \"kubernetes.io/projected/e064fe82-3a6a-490b-a123-38a56ae4234c-kube-api-access-d9mp2\") pod \"neutron-59856f68c6-mltb2\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.869024 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-httpd-config\") pod \"neutron-59856f68c6-mltb2\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.869053 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-ovndb-tls-certs\") pod \"neutron-59856f68c6-mltb2\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:08 crc 
kubenswrapper[4805]: I1216 12:17:08.869097 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-combined-ca-bundle\") pod \"neutron-59856f68c6-mltb2\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.948533 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.957015 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.971777 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-config-data\") pod \"aa98eb47-6335-4544-b5d6-718b55075000\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.971873 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x8xx\" (UniqueName: \"kubernetes.io/projected/aa98eb47-6335-4544-b5d6-718b55075000-kube-api-access-2x8xx\") pod \"aa98eb47-6335-4544-b5d6-718b55075000\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.972004 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa98eb47-6335-4544-b5d6-718b55075000-etc-machine-id\") pod \"aa98eb47-6335-4544-b5d6-718b55075000\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.972034 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-combined-ca-bundle\") pod \"aa98eb47-6335-4544-b5d6-718b55075000\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.972053 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-scripts\") pod \"aa98eb47-6335-4544-b5d6-718b55075000\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.972078 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-db-sync-config-data\") pod \"aa98eb47-6335-4544-b5d6-718b55075000\" (UID: \"aa98eb47-6335-4544-b5d6-718b55075000\") " Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.972438 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa98eb47-6335-4544-b5d6-718b55075000-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "aa98eb47-6335-4544-b5d6-718b55075000" (UID: "aa98eb47-6335-4544-b5d6-718b55075000"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.972842 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9mp2\" (UniqueName: \"kubernetes.io/projected/e064fe82-3a6a-490b-a123-38a56ae4234c-kube-api-access-d9mp2\") pod \"neutron-59856f68c6-mltb2\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.972876 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-httpd-config\") pod \"neutron-59856f68c6-mltb2\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.972891 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-ovndb-tls-certs\") pod \"neutron-59856f68c6-mltb2\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.972927 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-combined-ca-bundle\") pod \"neutron-59856f68c6-mltb2\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.972980 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-config\") pod \"neutron-59856f68c6-mltb2\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.973037 4805 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa98eb47-6335-4544-b5d6-718b55075000-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.980958 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-ovndb-tls-certs\") pod \"neutron-59856f68c6-mltb2\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:08 crc kubenswrapper[4805]: I1216 12:17:08.981405 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-httpd-config\") pod \"neutron-59856f68c6-mltb2\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.014760 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "aa98eb47-6335-4544-b5d6-718b55075000" (UID: "aa98eb47-6335-4544-b5d6-718b55075000"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.015635 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-combined-ca-bundle\") pod \"neutron-59856f68c6-mltb2\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.024366 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-scripts" (OuterVolumeSpecName: "scripts") pod "aa98eb47-6335-4544-b5d6-718b55075000" (UID: "aa98eb47-6335-4544-b5d6-718b55075000"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.024938 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa98eb47-6335-4544-b5d6-718b55075000-kube-api-access-2x8xx" (OuterVolumeSpecName: "kube-api-access-2x8xx") pod "aa98eb47-6335-4544-b5d6-718b55075000" (UID: "aa98eb47-6335-4544-b5d6-718b55075000"). InnerVolumeSpecName "kube-api-access-2x8xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.025397 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9mp2\" (UniqueName: \"kubernetes.io/projected/e064fe82-3a6a-490b-a123-38a56ae4234c-kube-api-access-d9mp2\") pod \"neutron-59856f68c6-mltb2\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.025855 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-config\") pod \"neutron-59856f68c6-mltb2\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.074496 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.075292 4805 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.075386 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x8xx\" (UniqueName: \"kubernetes.io/projected/aa98eb47-6335-4544-b5d6-718b55075000-kube-api-access-2x8xx\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.081292 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.096359 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa98eb47-6335-4544-b5d6-718b55075000" (UID: "aa98eb47-6335-4544-b5d6-718b55075000"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.145186 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-config-data" (OuterVolumeSpecName: "config-data") pod "aa98eb47-6335-4544-b5d6-718b55075000" (UID: "aa98eb47-6335-4544-b5d6-718b55075000"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.177288 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.177334 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa98eb47-6335-4544-b5d6-718b55075000-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.275873 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9v2nl" event={"ID":"aa98eb47-6335-4544-b5d6-718b55075000","Type":"ContainerDied","Data":"ba702690034b498c0522e55cb045496076531450a67535e9dd61d2d39c024375"} Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.276189 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba702690034b498c0522e55cb045496076531450a67535e9dd61d2d39c024375" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.276263 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9v2nl" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.290709 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"240e88fd-9bca-4893-8ed9-3be361b7e220","Type":"ContainerStarted","Data":"930d6fbc28bddd3bd7092d04b4b41b8e6db9e15bbc9cca09cd338c559cb684c3"} Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.776591 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 16 12:17:09 crc kubenswrapper[4805]: E1216 12:17:09.777644 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa98eb47-6335-4544-b5d6-718b55075000" containerName="cinder-db-sync" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.777809 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa98eb47-6335-4544-b5d6-718b55075000" containerName="cinder-db-sync" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.778396 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa98eb47-6335-4544-b5d6-718b55075000" containerName="cinder-db-sync" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.780155 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.803090 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.803088 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.803172 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ql6ss" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.803761 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.868175 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d319359d-f646-42d6-95ac-4a64f4177e1e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.868286 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-scripts\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.868332 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.868504 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhc2w\" (UniqueName: \"kubernetes.io/projected/d319359d-f646-42d6-95ac-4a64f4177e1e-kube-api-access-fhc2w\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.868545 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-config-data\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.868587 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.968608 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mvtlh"] Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.970080 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d319359d-f646-42d6-95ac-4a64f4177e1e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " 
pod="openstack/cinder-scheduler-0" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.970312 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-scripts\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.970395 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.970523 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhc2w\" (UniqueName: \"kubernetes.io/projected/d319359d-f646-42d6-95ac-4a64f4177e1e-kube-api-access-fhc2w\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.970591 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-config-data\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.970671 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.971314 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d319359d-f646-42d6-95ac-4a64f4177e1e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:09 crc kubenswrapper[4805]: I1216 12:17:09.985623 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-scripts\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.000462 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.000889 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.012206 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-config-data\") 
pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.025353 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mvtlh"] Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.030734 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhc2w\" (UniqueName: \"kubernetes.io/projected/d319359d-f646-42d6-95ac-4a64f4177e1e-kube-api-access-fhc2w\") pod \"cinder-scheduler-0\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.111127 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q284g"] Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.113330 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.125240 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q284g"] Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.137055 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.174241 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.189435 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-config\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.189510 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8ckb\" (UniqueName: \"kubernetes.io/projected/cf992c26-81d6-40e1-8b60-58c5fed5db64-kube-api-access-k8ckb\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.189560 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-dns-svc\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.189581 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.189641 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: 
I1216 12:17:10.189670 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.293188 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.293244 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.293298 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-config\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.293352 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8ckb\" (UniqueName: \"kubernetes.io/projected/cf992c26-81d6-40e1-8b60-58c5fed5db64-kube-api-access-k8ckb\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.293392 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-dns-svc\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.293408 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.294278 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.294798 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.295342 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.295858 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-config\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.295935 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.297960 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.306724 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-dns-svc\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.308451 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.348329 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8ckb\" (UniqueName: \"kubernetes.io/projected/cf992c26-81d6-40e1-8b60-58c5fed5db64-kube-api-access-k8ckb\") pod \"dnsmasq-dns-5784cf869f-q284g\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.359542 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" event={"ID":"1b539a01-aa65-4642-84e4-06ca8783f813","Type":"ContainerStarted","Data":"7f4b9e791a304b291f94d57b6bd28bb63808dc21758d0ba44eeb32f84e4e57ba"} Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.370249 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.404225 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-scripts\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.404628 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-config-data\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.404808 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-config-data-custom\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 
12:17:10.404889 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a01bee48-5529-47a8-9395-797b6113ce6a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.404999 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.405190 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a01bee48-5529-47a8-9395-797b6113ce6a-logs\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.405280 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7zx6\" (UniqueName: \"kubernetes.io/projected/a01bee48-5529-47a8-9395-797b6113ce6a-kube-api-access-n7zx6\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.652988 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.653557 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a01bee48-5529-47a8-9395-797b6113ce6a-logs\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.653604 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7zx6\" (UniqueName: \"kubernetes.io/projected/a01bee48-5529-47a8-9395-797b6113ce6a-kube-api-access-n7zx6\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.653650 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-scripts\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.653669 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-config-data\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.653699 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-config-data-custom\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.653716 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a01bee48-5529-47a8-9395-797b6113ce6a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.653737 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.655796 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a01bee48-5529-47a8-9395-797b6113ce6a-logs\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.660872 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a01bee48-5529-47a8-9395-797b6113ce6a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.688113 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-config-data-custom\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.688554 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-scripts\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.689485 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-config-data\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.692860 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:10 crc kubenswrapper[4805]: I1216 12:17:10.711841 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7zx6\" (UniqueName: \"kubernetes.io/projected/a01bee48-5529-47a8-9395-797b6113ce6a-kube-api-access-n7zx6\") pod \"cinder-api-0\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " pod="openstack/cinder-api-0" Dec 16 12:17:11 crc kubenswrapper[4805]: I1216 12:17:10.962058 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 16 12:17:11 crc kubenswrapper[4805]: I1216 12:17:11.320500 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:11 crc kubenswrapper[4805]: I1216 12:17:11.320788 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59856f68c6-mltb2"] Dec 16 12:17:11 crc kubenswrapper[4805]: I1216 12:17:11.610962 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58c65944b9-fbmdw" Dec 16 12:17:12 crc kubenswrapper[4805]: I1216 12:17:12.336465 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 16 12:17:12 crc kubenswrapper[4805]: I1216 12:17:12.381602 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q284g"] Dec 16 12:17:12 crc kubenswrapper[4805]: I1216 12:17:12.506671 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 16 12:17:12 crc kubenswrapper[4805]: I1216 12:17:12.563745 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d319359d-f646-42d6-95ac-4a64f4177e1e","Type":"ContainerStarted","Data":"466930c7f8db91d8df4733a4c8b1d40c241fbe29f08df9d60940a39a803b49d2"} Dec 16 12:17:12 crc kubenswrapper[4805]: I1216 12:17:12.563787 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-q284g" event={"ID":"cf992c26-81d6-40e1-8b60-58c5fed5db64","Type":"ContainerStarted","Data":"3a61505c8346117b6f91d120d535b9b098fe6ea52580fc353e15ed5f71225e4a"} Dec 16 12:17:12 crc kubenswrapper[4805]: I1216 12:17:12.563804 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59856f68c6-mltb2" event={"ID":"e064fe82-3a6a-490b-a123-38a56ae4234c","Type":"ContainerStarted","Data":"fd7248f5e2c4a484412d9b980d0bc21147a02610a77774edd49804ecf384bf82"} Dec 16 12:17:13 crc kubenswrapper[4805]: I1216 12:17:13.585293 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59856f68c6-mltb2" event={"ID":"e064fe82-3a6a-490b-a123-38a56ae4234c","Type":"ContainerStarted","Data":"e7c793e2199d1363d14ed3c55e552bb64dad1f8a9cb55f80e525ddefd5d36218"} Dec 16 12:17:13 crc kubenswrapper[4805]: I1216 12:17:13.588417 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a01bee48-5529-47a8-9395-797b6113ce6a","Type":"ContainerStarted","Data":"c41a162420950cbea14d4d4d0d01b0017fa026b4d63cbc7efae7138d09e6418c"} Dec 16 12:17:13 crc kubenswrapper[4805]: I1216 12:17:13.595662 4805 generic.go:334] "Generic (PLEG): container finished" podID="1b539a01-aa65-4642-84e4-06ca8783f813" containerID="7bf0094378e33149c30099b1d5d157d99389bf1cd34b608eac2f9b4407f84907" exitCode=0 Dec 16 12:17:13 crc kubenswrapper[4805]: I1216 12:17:13.595796 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" event={"ID":"1b539a01-aa65-4642-84e4-06ca8783f813","Type":"ContainerDied","Data":"7bf0094378e33149c30099b1d5d157d99389bf1cd34b608eac2f9b4407f84907"} Dec 16 12:17:13 crc kubenswrapper[4805]: I1216 12:17:13.621836 4805 generic.go:334] "Generic (PLEG): container finished" podID="cf992c26-81d6-40e1-8b60-58c5fed5db64" containerID="84dea3ad9d2e1f5c90a4d6f2c4dd6e5c859b9c223427dde8dadac313faea73f7" exitCode=0 Dec 16 12:17:13 crc kubenswrapper[4805]: I1216 12:17:13.621929 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5784cf869f-q284g" event={"ID":"cf992c26-81d6-40e1-8b60-58c5fed5db64","Type":"ContainerDied","Data":"84dea3ad9d2e1f5c90a4d6f2c4dd6e5c859b9c223427dde8dadac313faea73f7"} Dec 16 12:17:13 crc kubenswrapper[4805]: I1216 12:17:13.638002 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"240e88fd-9bca-4893-8ed9-3be361b7e220","Type":"ContainerStarted","Data":"b4392bb4d66d6d693f3904acf5dfb8b2f8564ead6405e5a086ff58f03714cc3a"} Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.055059 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-q284g" event={"ID":"cf992c26-81d6-40e1-8b60-58c5fed5db64","Type":"ContainerStarted","Data":"b7b93caecb8940d7db3e715aeb53d588e8de71266f7fa26698e9c38e87ea49c0"} Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.057240 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.072535 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.085457 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59856f68c6-mltb2" event={"ID":"e064fe82-3a6a-490b-a123-38a56ae4234c","Type":"ContainerStarted","Data":"648d4285b5c3029f9528f51a18b7d3d602306c5207e09b95a1904e14ae85d422"} Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.086408 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.142943 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdwc\" (UniqueName: \"kubernetes.io/projected/1b539a01-aa65-4642-84e4-06ca8783f813-kube-api-access-zgdwc\") pod \"1b539a01-aa65-4642-84e4-06ca8783f813\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.143900 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-config\") pod \"1b539a01-aa65-4642-84e4-06ca8783f813\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.155835 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-ovsdbserver-sb\") pod \"1b539a01-aa65-4642-84e4-06ca8783f813\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.155987 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-ovsdbserver-nb\") pod \"1b539a01-aa65-4642-84e4-06ca8783f813\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.156230 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-dns-swift-storage-0\") pod \"1b539a01-aa65-4642-84e4-06ca8783f813\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.156332 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-dns-svc\") pod \"1b539a01-aa65-4642-84e4-06ca8783f813\" (UID: \"1b539a01-aa65-4642-84e4-06ca8783f813\") " Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.155654 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b539a01-aa65-4642-84e4-06ca8783f813-kube-api-access-zgdwc" (OuterVolumeSpecName: "kube-api-access-zgdwc") pod "1b539a01-aa65-4642-84e4-06ca8783f813" (UID: "1b539a01-aa65-4642-84e4-06ca8783f813"). InnerVolumeSpecName "kube-api-access-zgdwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.238589 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-config" (OuterVolumeSpecName: "config") pod "1b539a01-aa65-4642-84e4-06ca8783f813" (UID: "1b539a01-aa65-4642-84e4-06ca8783f813"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.258392 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b539a01-aa65-4642-84e4-06ca8783f813" (UID: "1b539a01-aa65-4642-84e4-06ca8783f813"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.260777 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1b539a01-aa65-4642-84e4-06ca8783f813" (UID: "1b539a01-aa65-4642-84e4-06ca8783f813"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.271541 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.275228 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdwc\" (UniqueName: \"kubernetes.io/projected/1b539a01-aa65-4642-84e4-06ca8783f813-kube-api-access-zgdwc\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.275493 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.275589 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.276328 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1b539a01-aa65-4642-84e4-06ca8783f813" (UID: "1b539a01-aa65-4642-84e4-06ca8783f813"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.297857 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-q284g" podStartSLOduration=6.297822883 podStartE2EDuration="6.297822883s" podCreationTimestamp="2025-12-16 12:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:17:15.141547771 +0000 UTC m=+1308.859805576" watchObservedRunningTime="2025-12-16 12:17:15.297822883 +0000 UTC m=+1309.016080708" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.314276 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1b539a01-aa65-4642-84e4-06ca8783f813" (UID: "1b539a01-aa65-4642-84e4-06ca8783f813"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.363033 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59856f68c6-mltb2" podStartSLOduration=7.363011662 podStartE2EDuration="7.363011662s" podCreationTimestamp="2025-12-16 12:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:17:15.306053329 +0000 UTC m=+1309.024311134" watchObservedRunningTime="2025-12-16 12:17:15.363011662 +0000 UTC m=+1309.081269487" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.377969 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:15 crc kubenswrapper[4805]: I1216 12:17:15.378526 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b539a01-aa65-4642-84e4-06ca8783f813-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:16 crc kubenswrapper[4805]: I1216 12:17:16.119242 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" Dec 16 12:17:16 crc kubenswrapper[4805]: I1216 12:17:16.119374 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-mvtlh" event={"ID":"1b539a01-aa65-4642-84e4-06ca8783f813","Type":"ContainerDied","Data":"7f4b9e791a304b291f94d57b6bd28bb63808dc21758d0ba44eeb32f84e4e57ba"} Dec 16 12:17:16 crc kubenswrapper[4805]: I1216 12:17:16.120364 4805 scope.go:117] "RemoveContainer" containerID="7bf0094378e33149c30099b1d5d157d99389bf1cd34b608eac2f9b4407f84907" Dec 16 12:17:16 crc kubenswrapper[4805]: I1216 12:17:16.126331 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"240e88fd-9bca-4893-8ed9-3be361b7e220","Type":"ContainerStarted","Data":"83537b3a80b5fefac225915ab72a7fbd4c12a7e5a26dadd67e61c98e81868e3f"} Dec 16 12:17:16 crc kubenswrapper[4805]: I1216 12:17:16.136101 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 16 12:17:16 crc kubenswrapper[4805]: I1216 12:17:16.286887 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mvtlh"] Dec 16 12:17:16 crc kubenswrapper[4805]: I1216 12:17:16.300566 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-mvtlh"] Dec 16 12:17:16 crc kubenswrapper[4805]: I1216 12:17:16.608756 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b539a01-aa65-4642-84e4-06ca8783f813" path="/var/lib/kubelet/pods/1b539a01-aa65-4642-84e4-06ca8783f813/volumes" Dec 16 12:17:16 crc kubenswrapper[4805]: I1216 12:17:16.906482 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:17:17 crc kubenswrapper[4805]: I1216 12:17:17.572816 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-775868cbc4-vvjnt" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.163061 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d319359d-f646-42d6-95ac-4a64f4177e1e","Type":"ContainerStarted","Data":"0d8af945ccef93ed8b85c8065f8935c78efb794a78b1f9ca470528f1f16bcca9"} Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.165065 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a01bee48-5529-47a8-9395-797b6113ce6a","Type":"ContainerStarted","Data":"945ad6b3912b62c706f5579aa7f4cfec61398826fd636b037775fc05afe86c14"} Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.190648 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-57fbfd7dcc-lq2v9"] Dec 16 12:17:18 crc kubenswrapper[4805]: E1216 12:17:18.191090 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b539a01-aa65-4642-84e4-06ca8783f813" containerName="init" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.191108 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b539a01-aa65-4642-84e4-06ca8783f813" containerName="init" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.216369 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b539a01-aa65-4642-84e4-06ca8783f813" containerName="init" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.217385 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.230521 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.231221 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.235548 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57fbfd7dcc-lq2v9"] Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.372728 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-config\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.372767 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-combined-ca-bundle\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.372808 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw8mx\" (UniqueName: \"kubernetes.io/projected/26f9167a-aa3e-4381-aac0-e0aaea7449a8-kube-api-access-cw8mx\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.372915 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-httpd-config\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.373008 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-public-tls-certs\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.373133 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-ovndb-tls-certs\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.373213 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-internal-tls-certs\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.480168 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-config\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.480492 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-combined-ca-bundle\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.480524 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw8mx\" (UniqueName: \"kubernetes.io/projected/26f9167a-aa3e-4381-aac0-e0aaea7449a8-kube-api-access-cw8mx\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.480598 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-httpd-config\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.480645 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-public-tls-certs\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.480707 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-ovndb-tls-certs\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.480751 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-internal-tls-certs\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.500513 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-config\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.509509 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-ovndb-tls-certs\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.511111 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-public-tls-certs\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") 
" pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.513888 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-internal-tls-certs\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.521965 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw8mx\" (UniqueName: \"kubernetes.io/projected/26f9167a-aa3e-4381-aac0-e0aaea7449a8-kube-api-access-cw8mx\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.540633 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-httpd-config\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.541589 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f9167a-aa3e-4381-aac0-e0aaea7449a8-combined-ca-bundle\") pod \"neutron-57fbfd7dcc-lq2v9\" (UID: \"26f9167a-aa3e-4381-aac0-e0aaea7449a8\") " pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:18 crc kubenswrapper[4805]: I1216 12:17:18.566808 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:19 crc kubenswrapper[4805]: I1216 12:17:19.847477 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57fbfd7dcc-lq2v9"] Dec 16 12:17:20 crc kubenswrapper[4805]: I1216 12:17:20.229366 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d319359d-f646-42d6-95ac-4a64f4177e1e","Type":"ContainerStarted","Data":"4de681c0a3b8611566616eefd51ad6e67e4fe891a8d5a217a864876cf94e07ec"} Dec 16 12:17:20 crc kubenswrapper[4805]: I1216 12:17:20.233011 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a01bee48-5529-47a8-9395-797b6113ce6a","Type":"ContainerStarted","Data":"5f8fbfe80bc83ff434a0bb7c2285f4d04dcdb54ed43f418d2663bb49fe1376af"} Dec 16 12:17:20 crc kubenswrapper[4805]: I1216 12:17:20.233278 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a01bee48-5529-47a8-9395-797b6113ce6a" containerName="cinder-api-log" containerID="cri-o://945ad6b3912b62c706f5579aa7f4cfec61398826fd636b037775fc05afe86c14" gracePeriod=30 Dec 16 12:17:20 crc kubenswrapper[4805]: I1216 12:17:20.233464 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 16 12:17:20 crc kubenswrapper[4805]: I1216 12:17:20.233524 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a01bee48-5529-47a8-9395-797b6113ce6a" containerName="cinder-api" containerID="cri-o://5f8fbfe80bc83ff434a0bb7c2285f4d04dcdb54ed43f418d2663bb49fe1376af" gracePeriod=30 Dec 16 12:17:20 crc kubenswrapper[4805]: I1216 12:17:20.252407 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"240e88fd-9bca-4893-8ed9-3be361b7e220","Type":"ContainerStarted","Data":"b94e42daddf885b2fda46ad859a50156c31f6b6455c3bfb675d652bd80bca979"} Dec 16 12:17:20 crc kubenswrapper[4805]: I1216 12:17:20.274857 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=9.811637326 podStartE2EDuration="11.27483117s" podCreationTimestamp="2025-12-16 12:17:09 +0000 UTC" firstStartedPulling="2025-12-16 12:17:12.359055591 +0000 UTC m=+1306.077313396" lastFinishedPulling="2025-12-16 12:17:13.822249435 +0000 UTC m=+1307.540507240" observedRunningTime="2025-12-16 12:17:20.254485627 +0000 UTC m=+1313.972743442" watchObservedRunningTime="2025-12-16 12:17:20.27483117 +0000 UTC m=+1313.993088985" Dec 16 12:17:20 crc kubenswrapper[4805]: I1216 12:17:20.276153 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57fbfd7dcc-lq2v9" event={"ID":"26f9167a-aa3e-4381-aac0-e0aaea7449a8","Type":"ContainerStarted","Data":"9831a4294c408ef0b158e4beaa85133dfa3718171771b8393c2d035f5129e43a"} Dec 16 12:17:20 crc kubenswrapper[4805]: I1216 12:17:20.288126 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=10.288103201 podStartE2EDuration="10.288103201s" podCreationTimestamp="2025-12-16 12:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:17:20.283749106 +0000 UTC m=+1314.002006961" watchObservedRunningTime="2025-12-16 12:17:20.288103201 +0000 UTC m=+1314.006361026" Dec 16 12:17:20 crc kubenswrapper[4805]: I1216 12:17:20.655936 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:17:20 crc kubenswrapper[4805]: I1216 12:17:20.755497 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-xrd22"] Dec 16 12:17:20 crc kubenswrapper[4805]: I1216 12:17:20.755794 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" podUID="a451754e-cb72-4c93-833e-2c10d5b3bea6" containerName="dnsmasq-dns" containerID="cri-o://a67bf69d873a42d15dcabd48737d4143249c1615916a1408ab38bba5b23cd924" gracePeriod=10 Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.300974 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57fbfd7dcc-lq2v9" event={"ID":"26f9167a-aa3e-4381-aac0-e0aaea7449a8","Type":"ContainerStarted","Data":"ea2fdbcd3d6ebcc183948bad810f58cd4097c6a9940871679676ca5b2632b0e0"} Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.306808 4805 generic.go:334] "Generic (PLEG): container finished" podID="a451754e-cb72-4c93-833e-2c10d5b3bea6" containerID="a67bf69d873a42d15dcabd48737d4143249c1615916a1408ab38bba5b23cd924" exitCode=0 Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.306887 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" event={"ID":"a451754e-cb72-4c93-833e-2c10d5b3bea6","Type":"ContainerDied","Data":"a67bf69d873a42d15dcabd48737d4143249c1615916a1408ab38bba5b23cd924"} Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.310605 4805 generic.go:334] "Generic (PLEG): container finished" podID="a01bee48-5529-47a8-9395-797b6113ce6a" containerID="5f8fbfe80bc83ff434a0bb7c2285f4d04dcdb54ed43f418d2663bb49fe1376af" exitCode=0 Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.310641 4805 
generic.go:334] "Generic (PLEG): container finished" podID="a01bee48-5529-47a8-9395-797b6113ce6a" containerID="945ad6b3912b62c706f5579aa7f4cfec61398826fd636b037775fc05afe86c14" exitCode=143 Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.311609 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a01bee48-5529-47a8-9395-797b6113ce6a","Type":"ContainerDied","Data":"5f8fbfe80bc83ff434a0bb7c2285f4d04dcdb54ed43f418d2663bb49fe1376af"} Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.311648 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a01bee48-5529-47a8-9395-797b6113ce6a","Type":"ContainerDied","Data":"945ad6b3912b62c706f5579aa7f4cfec61398826fd636b037775fc05afe86c14"} Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.824416 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.879681 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-config-data\") pod \"a01bee48-5529-47a8-9395-797b6113ce6a\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.879762 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-config-data-custom\") pod \"a01bee48-5529-47a8-9395-797b6113ce6a\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.879785 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-scripts\") pod \"a01bee48-5529-47a8-9395-797b6113ce6a\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.882278 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-combined-ca-bundle\") pod \"a01bee48-5529-47a8-9395-797b6113ce6a\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.882404 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a01bee48-5529-47a8-9395-797b6113ce6a-logs\") pod \"a01bee48-5529-47a8-9395-797b6113ce6a\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.882435 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7zx6\" (UniqueName: \"kubernetes.io/projected/a01bee48-5529-47a8-9395-797b6113ce6a-kube-api-access-n7zx6\") pod \"a01bee48-5529-47a8-9395-797b6113ce6a\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.882547 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a01bee48-5529-47a8-9395-797b6113ce6a-etc-machine-id\") pod \"a01bee48-5529-47a8-9395-797b6113ce6a\" (UID: \"a01bee48-5529-47a8-9395-797b6113ce6a\") " Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.883268 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/a01bee48-5529-47a8-9395-797b6113ce6a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a01bee48-5529-47a8-9395-797b6113ce6a" (UID: "a01bee48-5529-47a8-9395-797b6113ce6a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.884442 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a01bee48-5529-47a8-9395-797b6113ce6a-logs" (OuterVolumeSpecName: "logs") pod "a01bee48-5529-47a8-9395-797b6113ce6a" (UID: "a01bee48-5529-47a8-9395-797b6113ce6a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.895190 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a01bee48-5529-47a8-9395-797b6113ce6a" (UID: "a01bee48-5529-47a8-9395-797b6113ce6a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.895686 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-scripts" (OuterVolumeSpecName: "scripts") pod "a01bee48-5529-47a8-9395-797b6113ce6a" (UID: "a01bee48-5529-47a8-9395-797b6113ce6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:21 crc kubenswrapper[4805]: I1216 12:17:21.907730 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a01bee48-5529-47a8-9395-797b6113ce6a-kube-api-access-n7zx6" (OuterVolumeSpecName: "kube-api-access-n7zx6") pod "a01bee48-5529-47a8-9395-797b6113ce6a" (UID: "a01bee48-5529-47a8-9395-797b6113ce6a"). InnerVolumeSpecName "kube-api-access-n7zx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:21.990680 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a01bee48-5529-47a8-9395-797b6113ce6a-logs\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:21.990717 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7zx6\" (UniqueName: \"kubernetes.io/projected/a01bee48-5529-47a8-9395-797b6113ce6a-kube-api-access-n7zx6\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:21.990728 4805 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a01bee48-5529-47a8-9395-797b6113ce6a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:21.990738 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:21.990746 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.092831 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a01bee48-5529-47a8-9395-797b6113ce6a" (UID: "a01bee48-5529-47a8-9395-797b6113ce6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.132344 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-config-data" (OuterVolumeSpecName: "config-data") pod "a01bee48-5529-47a8-9395-797b6113ce6a" (UID: "a01bee48-5529-47a8-9395-797b6113ce6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.156691 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.194365 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.194400 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a01bee48-5529-47a8-9395-797b6113ce6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.295433 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njq2g\" (UniqueName: \"kubernetes.io/projected/a451754e-cb72-4c93-833e-2c10d5b3bea6-kube-api-access-njq2g\") pod \"a451754e-cb72-4c93-833e-2c10d5b3bea6\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.295561 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-ovsdbserver-sb\") pod \"a451754e-cb72-4c93-833e-2c10d5b3bea6\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.295637 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-dns-svc\") pod \"a451754e-cb72-4c93-833e-2c10d5b3bea6\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.295695 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-ovsdbserver-nb\") pod \"a451754e-cb72-4c93-833e-2c10d5b3bea6\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.295720 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-dns-swift-storage-0\") pod \"a451754e-cb72-4c93-833e-2c10d5b3bea6\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.295778 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-config\") pod \"a451754e-cb72-4c93-833e-2c10d5b3bea6\" (UID: \"a451754e-cb72-4c93-833e-2c10d5b3bea6\") " Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.314906 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a451754e-cb72-4c93-833e-2c10d5b3bea6-kube-api-access-njq2g" (OuterVolumeSpecName: "kube-api-access-njq2g") pod "a451754e-cb72-4c93-833e-2c10d5b3bea6" (UID: "a451754e-cb72-4c93-833e-2c10d5b3bea6"). InnerVolumeSpecName "kube-api-access-njq2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.344491 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a01bee48-5529-47a8-9395-797b6113ce6a","Type":"ContainerDied","Data":"c41a162420950cbea14d4d4d0d01b0017fa026b4d63cbc7efae7138d09e6418c"} Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.344550 4805 scope.go:117] "RemoveContainer" containerID="5f8fbfe80bc83ff434a0bb7c2285f4d04dcdb54ed43f418d2663bb49fe1376af" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.344559 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.381343 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" event={"ID":"a451754e-cb72-4c93-833e-2c10d5b3bea6","Type":"ContainerDied","Data":"0bd3257e223f50005aff9bc6f7e4901f28a13b6b76a107de091124929c393bff"} Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.387525 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-xrd22" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.395196 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57fbfd7dcc-lq2v9" event={"ID":"26f9167a-aa3e-4381-aac0-e0aaea7449a8","Type":"ContainerStarted","Data":"2cb1989c2370278811b4e63ba2f64b9c79e25914de0e27c11b061acf7af09f58"} Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.437334 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.437885 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njq2g\" (UniqueName: \"kubernetes.io/projected/a451754e-cb72-4c93-833e-2c10d5b3bea6-kube-api-access-njq2g\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.461393 4805 scope.go:117] "RemoveContainer" containerID="945ad6b3912b62c706f5579aa7f4cfec61398826fd636b037775fc05afe86c14" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.655777 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-57fbfd7dcc-lq2v9" podStartSLOduration=4.655751723 podStartE2EDuration="4.655751723s" podCreationTimestamp="2025-12-16 12:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:17:22.590166792 +0000 UTC m=+1316.308424597" watchObservedRunningTime="2025-12-16 12:17:22.655751723 +0000 UTC m=+1316.374009538" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.682939 4805 scope.go:117] "RemoveContainer" containerID="a67bf69d873a42d15dcabd48737d4143249c1615916a1408ab38bba5b23cd924" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.718747 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a451754e-cb72-4c93-833e-2c10d5b3bea6" (UID: "a451754e-cb72-4c93-833e-2c10d5b3bea6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.734780 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.745648 4805 scope.go:117] "RemoveContainer" containerID="4e86dd9a447d7aea5b9f40c28a518dd59ed11293751c3a0bcac02a831c7e65eb" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.753367 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.764248 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 16 12:17:22 crc kubenswrapper[4805]: E1216 12:17:22.764789 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01bee48-5529-47a8-9395-797b6113ce6a" containerName="cinder-api" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.764810 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01bee48-5529-47a8-9395-797b6113ce6a" containerName="cinder-api" Dec 16 12:17:22 crc kubenswrapper[4805]: E1216 12:17:22.764833 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a451754e-cb72-4c93-833e-2c10d5b3bea6" containerName="init" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.764841 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a451754e-cb72-4c93-833e-2c10d5b3bea6" containerName="init" Dec 16 12:17:22 crc kubenswrapper[4805]: E1216 12:17:22.764864 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a451754e-cb72-4c93-833e-2c10d5b3bea6" containerName="dnsmasq-dns" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.764874 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a451754e-cb72-4c93-833e-2c10d5b3bea6" containerName="dnsmasq-dns" Dec 16 12:17:22 crc kubenswrapper[4805]: E1216 12:17:22.764889 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01bee48-5529-47a8-9395-797b6113ce6a" containerName="cinder-api-log" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.764896 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01bee48-5529-47a8-9395-797b6113ce6a" containerName="cinder-api-log" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.765111 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a451754e-cb72-4c93-833e-2c10d5b3bea6" containerName="dnsmasq-dns" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.765136 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a01bee48-5529-47a8-9395-797b6113ce6a" containerName="cinder-api" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.765177 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a01bee48-5529-47a8-9395-797b6113ce6a" containerName="cinder-api-log" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.766565 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.769651 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.769719 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 16 12:17:22 crc kubenswrapper[4805]: E1216 12:17:22.774270 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda01bee48_5529_47a8_9395_797b6113ce6a.slice/crio-c41a162420950cbea14d4d4d0d01b0017fa026b4d63cbc7efae7138d09e6418c\": RecentStats: unable to find data in memory cache]" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.776190 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.783258 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.795072 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.893122 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a451754e-cb72-4c93-833e-2c10d5b3bea6" (UID: "a451754e-cb72-4c93-833e-2c10d5b3bea6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.896343 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-scripts\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.896406 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.896461 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.896481 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.896498 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-config-data-custom\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.896516 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzktv\" (UniqueName: \"kubernetes.io/projected/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-kube-api-access-nzktv\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.896538 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.896569 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-logs\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.896622 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-config-data\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.896681 4805 reconciler_common.go:293] "Volume detached for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.976360 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a451754e-cb72-4c93-833e-2c10d5b3bea6" (UID: "a451754e-cb72-4c93-833e-2c10d5b3bea6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.982886 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a451754e-cb72-4c93-833e-2c10d5b3bea6" (UID: "a451754e-cb72-4c93-833e-2c10d5b3bea6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.999357 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-config-data\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.999455 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-scripts\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.999485 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.999552 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.999586 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.999610 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-config-data-custom\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.999636 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzktv\" (UniqueName: \"kubernetes.io/projected/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-kube-api-access-nzktv\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc 
kubenswrapper[4805]: I1216 12:17:22.999675 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:22 crc kubenswrapper[4805]: I1216 12:17:22.999725 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-logs\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:22.999784 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:22.999796 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.002342 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-config" (OuterVolumeSpecName: "config") pod "a451754e-cb72-4c93-833e-2c10d5b3bea6" (UID: "a451754e-cb72-4c93-833e-2c10d5b3bea6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.007350 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-config-data-custom\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.007559 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.007649 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-logs\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.013920 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.016517 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-config-data\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.016775 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.017070 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.022793 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-scripts\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.064075 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzktv\" (UniqueName: \"kubernetes.io/projected/bdfe5fb7-a1ea-4cd6-887d-46b30ece2329-kube-api-access-nzktv\") pod \"cinder-api-0\" (UID: \"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329\") " pod="openstack/cinder-api-0" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.098298 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.104197 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a451754e-cb72-4c93-833e-2c10d5b3bea6-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.110107 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.110380 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="41274a07-eb55-48ed-9ebd-35edbc26f4f4" containerName="glance-log" containerID="cri-o://2441b4e70d165b40dc8fdc3e38db9226fe254d60d39f69bc54ffab09b95d41ec" gracePeriod=30 Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.110865 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="41274a07-eb55-48ed-9ebd-35edbc26f4f4" containerName="glance-httpd" containerID="cri-o://8f9fbebe97f219721649f5c3f963ccd38f1df79ba3ef18a98c0ae98f8c60ab13" gracePeriod=30 Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.328599 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-xrd22"] Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.340969 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-xrd22"] Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.484681 4805 generic.go:334] "Generic (PLEG): container finished" podID="41274a07-eb55-48ed-9ebd-35edbc26f4f4" containerID="2441b4e70d165b40dc8fdc3e38db9226fe254d60d39f69bc54ffab09b95d41ec" exitCode=143 Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.485106 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"41274a07-eb55-48ed-9ebd-35edbc26f4f4","Type":"ContainerDied","Data":"2441b4e70d165b40dc8fdc3e38db9226fe254d60d39f69bc54ffab09b95d41ec"} Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.571950 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"240e88fd-9bca-4893-8ed9-3be361b7e220","Type":"ContainerStarted","Data":"7ea2619807242d249a4765df5b1e8f68ecabfb7f80819ebb4d6a7093964fdac9"} Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.572022 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.628882 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.462100356 podStartE2EDuration="16.628853751s" podCreationTimestamp="2025-12-16 12:17:07 +0000 UTC" firstStartedPulling="2025-12-16 12:17:08.402121798 +0000 UTC m=+1302.120379603" lastFinishedPulling="2025-12-16 12:17:21.568875193 +0000 UTC m=+1315.287132998" observedRunningTime="2025-12-16 12:17:23.602028022 +0000 UTC m=+1317.320285847" watchObservedRunningTime="2025-12-16 12:17:23.628853751 +0000 UTC m=+1317.347111576" Dec 16 12:17:23 crc kubenswrapper[4805]: I1216 12:17:23.721808 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 16 12:17:24 crc kubenswrapper[4805]: I1216 12:17:24.534823 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a01bee48-5529-47a8-9395-797b6113ce6a" path="/var/lib/kubelet/pods/a01bee48-5529-47a8-9395-797b6113ce6a/volumes" Dec 16 12:17:24 crc kubenswrapper[4805]: I1216 12:17:24.537126 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a451754e-cb72-4c93-833e-2c10d5b3bea6" path="/var/lib/kubelet/pods/a451754e-cb72-4c93-833e-2c10d5b3bea6/volumes" Dec 16 12:17:24 crc kubenswrapper[4805]: I1216 12:17:24.581454 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329","Type":"ContainerStarted","Data":"b9235d9e89998ffdbdc511f00f575698497bdfb53244099c71e4695974365e27"} Dec 16 12:17:24 crc kubenswrapper[4805]: I1216 12:17:24.763955 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.175393 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.176854 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="d319359d-f646-42d6-95ac-4a64f4177e1e" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.163:8080/\": dial tcp 10.217.0.163:8080: connect: connection refused" Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.321877 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-4kzsj"] Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.323684 4805 util.go:30] "No sandbox for pod can be found. 
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.364339 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-4kzsj"]
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.399476 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsd56\" (UniqueName: \"kubernetes.io/projected/9202f107-b728-488a-a01a-c4412be6e808-kube-api-access-dsd56\") pod \"nova-api-db-create-4kzsj\" (UID: \"9202f107-b728-488a-a01a-c4412be6e808\") " pod="openstack/nova-api-db-create-4kzsj"
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.427861 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-fb4np"]
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.429695 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-fb4np"
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.498944 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-fb4np"]
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.514210 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsd56\" (UniqueName: \"kubernetes.io/projected/9202f107-b728-488a-a01a-c4412be6e808-kube-api-access-dsd56\") pod \"nova-api-db-create-4kzsj\" (UID: \"9202f107-b728-488a-a01a-c4412be6e808\") " pod="openstack/nova-api-db-create-4kzsj"
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.515734 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5548\" (UniqueName: \"kubernetes.io/projected/d9477902-43a4-4af0-9d5b-0a063514b7e9-kube-api-access-v5548\") pod \"nova-cell0-db-create-fb4np\" (UID: \"d9477902-43a4-4af0-9d5b-0a063514b7e9\") " pod="openstack/nova-cell0-db-create-fb4np"
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.601564 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-9hjq9"]
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.601802 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsd56\" (UniqueName: \"kubernetes.io/projected/9202f107-b728-488a-a01a-c4412be6e808-kube-api-access-dsd56\") pod \"nova-api-db-create-4kzsj\" (UID: \"9202f107-b728-488a-a01a-c4412be6e808\") " pod="openstack/nova-api-db-create-4kzsj"
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.617025 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9hjq9"
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.618865 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5548\" (UniqueName: \"kubernetes.io/projected/d9477902-43a4-4af0-9d5b-0a063514b7e9-kube-api-access-v5548\") pod \"nova-cell0-db-create-fb4np\" (UID: \"d9477902-43a4-4af0-9d5b-0a063514b7e9\") " pod="openstack/nova-cell0-db-create-fb4np"
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.648875 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329","Type":"ContainerStarted","Data":"94dd797a3afd04c3b0d888532642a80b93ec9abc16ec98f29ddbbe3e9c52f59a"}
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.648937 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerName="ceilometer-central-agent" containerID="cri-o://b4392bb4d66d6d693f3904acf5dfb8b2f8564ead6405e5a086ff58f03714cc3a" gracePeriod=30
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.649001 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerName="sg-core" containerID="cri-o://b94e42daddf885b2fda46ad859a50156c31f6b6455c3bfb675d652bd80bca979" gracePeriod=30
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.649032 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerName="ceilometer-notification-agent" containerID="cri-o://83537b3a80b5fefac225915ab72a7fbd4c12a7e5a26dadd67e61c98e81868e3f" gracePeriod=30
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.649094 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerName="proxy-httpd" containerID="cri-o://7ea2619807242d249a4765df5b1e8f68ecabfb7f80819ebb4d6a7093964fdac9" gracePeriod=30
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.664978 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9hjq9"]
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.693398 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5548\" (UniqueName: \"kubernetes.io/projected/d9477902-43a4-4af0-9d5b-0a063514b7e9-kube-api-access-v5548\") pod \"nova-cell0-db-create-fb4np\" (UID: \"d9477902-43a4-4af0-9d5b-0a063514b7e9\") " pod="openstack/nova-cell0-db-create-fb4np"
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.693876 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-4kzsj"
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.727502 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rjtr\" (UniqueName: \"kubernetes.io/projected/ceb88d05-d1a2-4da2-a4ec-64649b5069e9-kube-api-access-4rjtr\") pod \"nova-cell1-db-create-9hjq9\" (UID: \"ceb88d05-d1a2-4da2-a4ec-64649b5069e9\") " pod="openstack/nova-cell1-db-create-9hjq9"
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.763619 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-fb4np"
Dec 16 12:17:25 crc kubenswrapper[4805]: I1216 12:17:25.954517 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rjtr\" (UniqueName: \"kubernetes.io/projected/ceb88d05-d1a2-4da2-a4ec-64649b5069e9-kube-api-access-4rjtr\") pod \"nova-cell1-db-create-9hjq9\" (UID: \"ceb88d05-d1a2-4da2-a4ec-64649b5069e9\") " pod="openstack/nova-cell1-db-create-9hjq9"
Dec 16 12:17:26 crc kubenswrapper[4805]: I1216 12:17:26.002972 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rjtr\" (UniqueName: \"kubernetes.io/projected/ceb88d05-d1a2-4da2-a4ec-64649b5069e9-kube-api-access-4rjtr\") pod \"nova-cell1-db-create-9hjq9\" (UID: \"ceb88d05-d1a2-4da2-a4ec-64649b5069e9\") " pod="openstack/nova-cell1-db-create-9hjq9"
Dec 16 12:17:26 crc kubenswrapper[4805]: I1216 12:17:26.263493 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9hjq9"
Dec 16 12:17:26 crc kubenswrapper[4805]: I1216 12:17:26.667454 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-4kzsj"]
Dec 16 12:17:26 crc kubenswrapper[4805]: W1216 12:17:26.669100 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9202f107_b728_488a_a01a_c4412be6e808.slice/crio-1112463a420ffe8aee7eebf813e39beb312f1f9ffdd098ea76a2b4b06ec374a1 WatchSource:0}: Error finding container 1112463a420ffe8aee7eebf813e39beb312f1f9ffdd098ea76a2b4b06ec374a1: Status 404 returned error can't find the container with id 1112463a420ffe8aee7eebf813e39beb312f1f9ffdd098ea76a2b4b06ec374a1
Dec 16 12:17:26 crc kubenswrapper[4805]: I1216 12:17:26.686418 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="41274a07-eb55-48ed-9ebd-35edbc26f4f4" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.149:9292/healthcheck\": read tcp 10.217.0.2:58870->10.217.0.149:9292: read: connection reset by peer"
Dec 16 12:17:26 crc kubenswrapper[4805]: I1216 12:17:26.686537 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="41274a07-eb55-48ed-9ebd-35edbc26f4f4" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.149:9292/healthcheck\": read tcp 10.217.0.2:58876->10.217.0.149:9292: read: connection reset by peer"
Dec 16 12:17:26 crc kubenswrapper[4805]: I1216 12:17:26.755201 4805 generic.go:334] "Generic (PLEG): container finished" podID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerID="b94e42daddf885b2fda46ad859a50156c31f6b6455c3bfb675d652bd80bca979" exitCode=2
Dec 16 12:17:26 crc kubenswrapper[4805]: I1216 12:17:26.755253 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"240e88fd-9bca-4893-8ed9-3be361b7e220","Type":"ContainerDied","Data":"b94e42daddf885b2fda46ad859a50156c31f6b6455c3bfb675d652bd80bca979"}
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.132672 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-fb4np"]
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.619380 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9hjq9"]
Dec 16 12:17:27 crc kubenswrapper[4805]: W1216 12:17:27.653585 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podceb88d05_d1a2_4da2_a4ec_64649b5069e9.slice/crio-a9e13c3c0499eaecfa076a5cf35a9e2a5382d5ef50ea968c67bd89621545285f WatchSource:0}: Error finding container a9e13c3c0499eaecfa076a5cf35a9e2a5382d5ef50ea968c67bd89621545285f: Status 404 returned error can't find the container with id a9e13c3c0499eaecfa076a5cf35a9e2a5382d5ef50ea968c67bd89621545285f
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.755645 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.773653 4805 generic.go:334] "Generic (PLEG): container finished" podID="41274a07-eb55-48ed-9ebd-35edbc26f4f4" containerID="8f9fbebe97f219721649f5c3f963ccd38f1df79ba3ef18a98c0ae98f8c60ab13" exitCode=0
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.773796 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.773852 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"41274a07-eb55-48ed-9ebd-35edbc26f4f4","Type":"ContainerDied","Data":"8f9fbebe97f219721649f5c3f963ccd38f1df79ba3ef18a98c0ae98f8c60ab13"}
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.774924 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"41274a07-eb55-48ed-9ebd-35edbc26f4f4","Type":"ContainerDied","Data":"467e7b21fe9aeac095d16e0da0c975251142a88d4b7effbb05b3fd402a21c45f"}
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.775045 4805 scope.go:117] "RemoveContainer" containerID="8f9fbebe97f219721649f5c3f963ccd38f1df79ba3ef18a98c0ae98f8c60ab13"
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.781640 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41274a07-eb55-48ed-9ebd-35edbc26f4f4-logs\") pod \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") "
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.781729 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-scripts\") pod \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") "
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.781768 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-config-data\") pod \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") "
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.781846 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6w9f\" (UniqueName: \"kubernetes.io/projected/41274a07-eb55-48ed-9ebd-35edbc26f4f4-kube-api-access-r6w9f\") pod \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") "
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.781948 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-combined-ca-bundle\") pod \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") "
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.781987 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-internal-tls-certs\") pod \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") "
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.782004 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41274a07-eb55-48ed-9ebd-35edbc26f4f4-httpd-run\") pod \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") "
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.782034 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\" (UID: \"41274a07-eb55-48ed-9ebd-35edbc26f4f4\") "
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.783521 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41274a07-eb55-48ed-9ebd-35edbc26f4f4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "41274a07-eb55-48ed-9ebd-35edbc26f4f4" (UID: "41274a07-eb55-48ed-9ebd-35edbc26f4f4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.790549 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41274a07-eb55-48ed-9ebd-35edbc26f4f4-logs" (OuterVolumeSpecName: "logs") pod "41274a07-eb55-48ed-9ebd-35edbc26f4f4" (UID: "41274a07-eb55-48ed-9ebd-35edbc26f4f4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.811196 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-scripts" (OuterVolumeSpecName: "scripts") pod "41274a07-eb55-48ed-9ebd-35edbc26f4f4" (UID: "41274a07-eb55-48ed-9ebd-35edbc26f4f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.811455 4805 generic.go:334] "Generic (PLEG): container finished" podID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerID="7ea2619807242d249a4765df5b1e8f68ecabfb7f80819ebb4d6a7093964fdac9" exitCode=0 Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.811483 4805 generic.go:334] "Generic (PLEG): container finished" podID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerID="83537b3a80b5fefac225915ab72a7fbd4c12a7e5a26dadd67e61c98e81868e3f" exitCode=0 Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.811517 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"240e88fd-9bca-4893-8ed9-3be361b7e220","Type":"ContainerDied","Data":"7ea2619807242d249a4765df5b1e8f68ecabfb7f80819ebb4d6a7093964fdac9"} Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.811551 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"240e88fd-9bca-4893-8ed9-3be361b7e220","Type":"ContainerDied","Data":"83537b3a80b5fefac225915ab72a7fbd4c12a7e5a26dadd67e61c98e81868e3f"} Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.819806 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41274a07-eb55-48ed-9ebd-35edbc26f4f4-kube-api-access-r6w9f" (OuterVolumeSpecName: "kube-api-access-r6w9f") pod "41274a07-eb55-48ed-9ebd-35edbc26f4f4" (UID: "41274a07-eb55-48ed-9ebd-35edbc26f4f4"). InnerVolumeSpecName "kube-api-access-r6w9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.983701 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "41274a07-eb55-48ed-9ebd-35edbc26f4f4" (UID: "41274a07-eb55-48ed-9ebd-35edbc26f4f4"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.985488 4805 generic.go:334] "Generic (PLEG): container finished" podID="9202f107-b728-488a-a01a-c4412be6e808" containerID="b05d4c0ab79b9c45ce7e8d704b5c869253d328f8b530dd6a7c96fea000b507ec" exitCode=0 Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.985601 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-4kzsj" event={"ID":"9202f107-b728-488a-a01a-c4412be6e808","Type":"ContainerDied","Data":"b05d4c0ab79b9c45ce7e8d704b5c869253d328f8b530dd6a7c96fea000b507ec"} Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.985628 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-4kzsj" event={"ID":"9202f107-b728-488a-a01a-c4412be6e808","Type":"ContainerStarted","Data":"1112463a420ffe8aee7eebf813e39beb312f1f9ffdd098ea76a2b4b06ec374a1"} Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.987446 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41274a07-eb55-48ed-9ebd-35edbc26f4f4-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.987511 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.987521 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41274a07-eb55-48ed-9ebd-35edbc26f4f4-logs\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.987529 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.987538 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6w9f\" (UniqueName: \"kubernetes.io/projected/41274a07-eb55-48ed-9ebd-35edbc26f4f4-kube-api-access-r6w9f\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:27 crc kubenswrapper[4805]: I1216 12:17:27.991982 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9hjq9" event={"ID":"ceb88d05-d1a2-4da2-a4ec-64649b5069e9","Type":"ContainerStarted","Data":"a9e13c3c0499eaecfa076a5cf35a9e2a5382d5ef50ea968c67bd89621545285f"} Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.006367 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fb4np" event={"ID":"d9477902-43a4-4af0-9d5b-0a063514b7e9","Type":"ContainerStarted","Data":"435b3d9064770029d0682402f68cf61f00bc2b4e050917591798a869633ab361"} Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.049710 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41274a07-eb55-48ed-9ebd-35edbc26f4f4" (UID: "41274a07-eb55-48ed-9ebd-35edbc26f4f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.050085 4805 scope.go:117] "RemoveContainer" containerID="2441b4e70d165b40dc8fdc3e38db9226fe254d60d39f69bc54ffab09b95d41ec" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.054950 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bdfe5fb7-a1ea-4cd6-887d-46b30ece2329","Type":"ContainerStarted","Data":"42799479172d353c44c87cc4a622c19d9697151f8b4d234e929b0b680cbb21c5"} Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.076263 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.090808 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.119723 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-fb4np" podStartSLOduration=3.119678195 podStartE2EDuration="3.119678195s" podCreationTimestamp="2025-12-16 12:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:17:28.103569663 +0000 UTC m=+1321.821827478" watchObservedRunningTime="2025-12-16 12:17:28.119678195 +0000 UTC m=+1321.837936020" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.193617 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.193593115 podStartE2EDuration="6.193593115s" podCreationTimestamp="2025-12-16 12:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:17:28.164899742 +0000 UTC m=+1321.883157557" watchObservedRunningTime="2025-12-16 12:17:28.193593115 +0000 UTC m=+1321.911850930" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.241263 4805 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.299735 4805 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.327208 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-config-data" (OuterVolumeSpecName: "config-data") pod "41274a07-eb55-48ed-9ebd-35edbc26f4f4" (UID: "41274a07-eb55-48ed-9ebd-35edbc26f4f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.332375 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "41274a07-eb55-48ed-9ebd-35edbc26f4f4" (UID: "41274a07-eb55-48ed-9ebd-35edbc26f4f4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.401489 4805 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.408493 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41274a07-eb55-48ed-9ebd-35edbc26f4f4-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.488721 4805 scope.go:117] "RemoveContainer" containerID="8f9fbebe97f219721649f5c3f963ccd38f1df79ba3ef18a98c0ae98f8c60ab13" Dec 16 12:17:28 crc kubenswrapper[4805]: E1216 12:17:28.495132 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f9fbebe97f219721649f5c3f963ccd38f1df79ba3ef18a98c0ae98f8c60ab13\": container with ID starting with 8f9fbebe97f219721649f5c3f963ccd38f1df79ba3ef18a98c0ae98f8c60ab13 not found: ID does not exist" containerID="8f9fbebe97f219721649f5c3f963ccd38f1df79ba3ef18a98c0ae98f8c60ab13" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.495242 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f9fbebe97f219721649f5c3f963ccd38f1df79ba3ef18a98c0ae98f8c60ab13"} err="failed to get container status \"8f9fbebe97f219721649f5c3f963ccd38f1df79ba3ef18a98c0ae98f8c60ab13\": rpc error: code = NotFound desc = could not find container \"8f9fbebe97f219721649f5c3f963ccd38f1df79ba3ef18a98c0ae98f8c60ab13\": container with ID starting with 8f9fbebe97f219721649f5c3f963ccd38f1df79ba3ef18a98c0ae98f8c60ab13 not found: ID does not exist" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.495295 4805 scope.go:117] "RemoveContainer" containerID="2441b4e70d165b40dc8fdc3e38db9226fe254d60d39f69bc54ffab09b95d41ec" Dec 16 12:17:28 crc kubenswrapper[4805]: E1216 12:17:28.495870 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2441b4e70d165b40dc8fdc3e38db9226fe254d60d39f69bc54ffab09b95d41ec\": container with ID starting with 2441b4e70d165b40dc8fdc3e38db9226fe254d60d39f69bc54ffab09b95d41ec not found: ID does not exist" containerID="2441b4e70d165b40dc8fdc3e38db9226fe254d60d39f69bc54ffab09b95d41ec" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.495913 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2441b4e70d165b40dc8fdc3e38db9226fe254d60d39f69bc54ffab09b95d41ec"} err="failed to get container status \"2441b4e70d165b40dc8fdc3e38db9226fe254d60d39f69bc54ffab09b95d41ec\": rpc error: code = NotFound desc = could not find container \"2441b4e70d165b40dc8fdc3e38db9226fe254d60d39f69bc54ffab09b95d41ec\": container with ID starting with 2441b4e70d165b40dc8fdc3e38db9226fe254d60d39f69bc54ffab09b95d41ec not found: ID does not exist" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.503107 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.515685 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.560650 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="41274a07-eb55-48ed-9ebd-35edbc26f4f4" path="/var/lib/kubelet/pods/41274a07-eb55-48ed-9ebd-35edbc26f4f4/volumes" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.561505 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 16 12:17:28 crc kubenswrapper[4805]: E1216 12:17:28.561914 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41274a07-eb55-48ed-9ebd-35edbc26f4f4" containerName="glance-httpd" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.562007 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="41274a07-eb55-48ed-9ebd-35edbc26f4f4" containerName="glance-httpd" Dec 16 12:17:28 crc kubenswrapper[4805]: E1216 12:17:28.562088 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41274a07-eb55-48ed-9ebd-35edbc26f4f4" containerName="glance-log" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.562158 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="41274a07-eb55-48ed-9ebd-35edbc26f4f4" containerName="glance-log" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.562463 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="41274a07-eb55-48ed-9ebd-35edbc26f4f4" containerName="glance-httpd" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.562595 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="41274a07-eb55-48ed-9ebd-35edbc26f4f4" containerName="glance-log" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.579519 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.587312 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.588889 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.596259 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.893346 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7ccad5e-bb55-4439-964a-2830bacf95e2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.893404 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7ccad5e-bb55-4439-964a-2830bacf95e2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.893457 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7ccad5e-bb55-4439-964a-2830bacf95e2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.893480 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.893531 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7ccad5e-bb55-4439-964a-2830bacf95e2-logs\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.893563 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7ccad5e-bb55-4439-964a-2830bacf95e2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.893634 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7ccad5e-bb55-4439-964a-2830bacf95e2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.893661 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srfwx\" (UniqueName: \"kubernetes.io/projected/b7ccad5e-bb55-4439-964a-2830bacf95e2-kube-api-access-srfwx\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.996311 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7ccad5e-bb55-4439-964a-2830bacf95e2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.996370 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srfwx\" (UniqueName: \"kubernetes.io/projected/b7ccad5e-bb55-4439-964a-2830bacf95e2-kube-api-access-srfwx\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.996526 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7ccad5e-bb55-4439-964a-2830bacf95e2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.996574 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7ccad5e-bb55-4439-964a-2830bacf95e2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.996635 4805 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7ccad5e-bb55-4439-964a-2830bacf95e2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.996661 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.996735 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7ccad5e-bb55-4439-964a-2830bacf95e2-logs\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.996786 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7ccad5e-bb55-4439-964a-2830bacf95e2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.997193 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.997230 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7ccad5e-bb55-4439-964a-2830bacf95e2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:28 crc kubenswrapper[4805]: I1216 12:17:28.997267 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7ccad5e-bb55-4439-964a-2830bacf95e2-logs\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:29 crc kubenswrapper[4805]: I1216 12:17:29.004115 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7ccad5e-bb55-4439-964a-2830bacf95e2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:29 crc kubenswrapper[4805]: I1216 12:17:29.025664 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7ccad5e-bb55-4439-964a-2830bacf95e2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:29 crc kubenswrapper[4805]: I1216 12:17:29.026068 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7ccad5e-bb55-4439-964a-2830bacf95e2-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:29 crc kubenswrapper[4805]: I1216 12:17:29.028359 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7ccad5e-bb55-4439-964a-2830bacf95e2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:29 crc kubenswrapper[4805]: I1216 12:17:29.033764 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srfwx\" (UniqueName: \"kubernetes.io/projected/b7ccad5e-bb55-4439-964a-2830bacf95e2-kube-api-access-srfwx\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:29 crc kubenswrapper[4805]: I1216 12:17:29.060221 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b7ccad5e-bb55-4439-964a-2830bacf95e2\") " pod="openstack/glance-default-internal-api-0" Dec 16 12:17:29 crc kubenswrapper[4805]: I1216 12:17:29.088246 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 16 12:17:29 crc kubenswrapper[4805]: I1216 12:17:29.129785 4805 generic.go:334] "Generic (PLEG): container finished" podID="ceb88d05-d1a2-4da2-a4ec-64649b5069e9" containerID="ccb345c51a5114a61f7c2937be509774097c505e5bbe196b3c0661a7b305d2a1" exitCode=0 Dec 16 12:17:29 crc kubenswrapper[4805]: I1216 12:17:29.129877 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9hjq9" event={"ID":"ceb88d05-d1a2-4da2-a4ec-64649b5069e9","Type":"ContainerDied","Data":"ccb345c51a5114a61f7c2937be509774097c505e5bbe196b3c0661a7b305d2a1"} Dec 16 12:17:29 crc kubenswrapper[4805]: I1216 12:17:29.137405 4805 generic.go:334] "Generic (PLEG): container finished" podID="d9477902-43a4-4af0-9d5b-0a063514b7e9" containerID="953796098ece18db3a779a60e5fcbe5a040afab6c7be6904f62098e0ba6a4b15" exitCode=0 Dec 16 12:17:29 crc kubenswrapper[4805]: I1216 12:17:29.137483 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fb4np" event={"ID":"d9477902-43a4-4af0-9d5b-0a063514b7e9","Type":"ContainerDied","Data":"953796098ece18db3a779a60e5fcbe5a040afab6c7be6904f62098e0ba6a4b15"} Dec 16 12:17:30 crc kubenswrapper[4805]: I1216 12:17:30.296030 4805 generic.go:334] "Generic (PLEG): container finished" podID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerID="b4392bb4d66d6d693f3904acf5dfb8b2f8564ead6405e5a086ff58f03714cc3a" exitCode=0 Dec 16 12:17:30 crc kubenswrapper[4805]: I1216 12:17:30.296468 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"240e88fd-9bca-4893-8ed9-3be361b7e220","Type":"ContainerDied","Data":"b4392bb4d66d6d693f3904acf5dfb8b2f8564ead6405e5a086ff58f03714cc3a"} Dec 16 12:17:30 crc kubenswrapper[4805]: I1216 12:17:30.479360 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 16 12:17:30 crc kubenswrapper[4805]: I1216 12:17:30.479653 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a0bdd7ef-5424-4628-b866-0aa42d44f257" containerName="glance-log" 
containerID="cri-o://bde0dfdb973e64b63035ae69186cf5b44e448ad2da47174ca17f5d68bd341678" gracePeriod=30 Dec 16 12:17:30 crc kubenswrapper[4805]: I1216 12:17:30.479802 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a0bdd7ef-5424-4628-b866-0aa42d44f257" containerName="glance-httpd" containerID="cri-o://9bdf3d24720c99d7ad95d8dbacf0c09ce9332ae14b7624c92ceba29805599bca" gracePeriod=30 Dec 16 12:17:30 crc kubenswrapper[4805]: I1216 12:17:30.584012 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.022792 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-4kzsj" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.126569 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsd56\" (UniqueName: \"kubernetes.io/projected/9202f107-b728-488a-a01a-c4412be6e808-kube-api-access-dsd56\") pod \"9202f107-b728-488a-a01a-c4412be6e808\" (UID: \"9202f107-b728-488a-a01a-c4412be6e808\") " Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.170682 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9202f107-b728-488a-a01a-c4412be6e808-kube-api-access-dsd56" (OuterVolumeSpecName: "kube-api-access-dsd56") pod "9202f107-b728-488a-a01a-c4412be6e808" (UID: "9202f107-b728-488a-a01a-c4412be6e808"). InnerVolumeSpecName "kube-api-access-dsd56". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.235868 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsd56\" (UniqueName: \"kubernetes.io/projected/9202f107-b728-488a-a01a-c4412be6e808-kube-api-access-dsd56\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.330881 4805 generic.go:334] "Generic (PLEG): container finished" podID="a0bdd7ef-5424-4628-b866-0aa42d44f257" containerID="bde0dfdb973e64b63035ae69186cf5b44e448ad2da47174ca17f5d68bd341678" exitCode=143 Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.330971 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a0bdd7ef-5424-4628-b866-0aa42d44f257","Type":"ContainerDied","Data":"bde0dfdb973e64b63035ae69186cf5b44e448ad2da47174ca17f5d68bd341678"} Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.337964 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b7ccad5e-bb55-4439-964a-2830bacf95e2","Type":"ContainerStarted","Data":"ee10236fc17d696f0fea9b0dd87e5ca93963e49ed64e1d157b64ca5e7036bb40"} Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.344273 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-4kzsj" event={"ID":"9202f107-b728-488a-a01a-c4412be6e808","Type":"ContainerDied","Data":"1112463a420ffe8aee7eebf813e39beb312f1f9ffdd098ea76a2b4b06ec374a1"} Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.344314 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1112463a420ffe8aee7eebf813e39beb312f1f9ffdd098ea76a2b4b06ec374a1" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.344383 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-4kzsj" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.359574 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.466993 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.551891 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-fb4np" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.661752 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9hjq9" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.714087 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.761660 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rjtr\" (UniqueName: \"kubernetes.io/projected/ceb88d05-d1a2-4da2-a4ec-64649b5069e9-kube-api-access-4rjtr\") pod \"ceb88d05-d1a2-4da2-a4ec-64649b5069e9\" (UID: \"ceb88d05-d1a2-4da2-a4ec-64649b5069e9\") " Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.761756 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5548\" (UniqueName: \"kubernetes.io/projected/d9477902-43a4-4af0-9d5b-0a063514b7e9-kube-api-access-v5548\") pod \"d9477902-43a4-4af0-9d5b-0a063514b7e9\" (UID: \"d9477902-43a4-4af0-9d5b-0a063514b7e9\") " Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.772393 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb88d05-d1a2-4da2-a4ec-64649b5069e9-kube-api-access-4rjtr" (OuterVolumeSpecName: "kube-api-access-4rjtr") pod "ceb88d05-d1a2-4da2-a4ec-64649b5069e9" (UID: "ceb88d05-d1a2-4da2-a4ec-64649b5069e9"). InnerVolumeSpecName "kube-api-access-4rjtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.811660 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9477902-43a4-4af0-9d5b-0a063514b7e9-kube-api-access-v5548" (OuterVolumeSpecName: "kube-api-access-v5548") pod "d9477902-43a4-4af0-9d5b-0a063514b7e9" (UID: "d9477902-43a4-4af0-9d5b-0a063514b7e9"). InnerVolumeSpecName "kube-api-access-v5548". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.869456 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh2n8\" (UniqueName: \"kubernetes.io/projected/240e88fd-9bca-4893-8ed9-3be361b7e220-kube-api-access-zh2n8\") pod \"240e88fd-9bca-4893-8ed9-3be361b7e220\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.869518 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-sg-core-conf-yaml\") pod \"240e88fd-9bca-4893-8ed9-3be361b7e220\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.869569 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-config-data\") pod \"240e88fd-9bca-4893-8ed9-3be361b7e220\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.869651 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/240e88fd-9bca-4893-8ed9-3be361b7e220-log-httpd\") pod \"240e88fd-9bca-4893-8ed9-3be361b7e220\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.869683 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-scripts\") pod \"240e88fd-9bca-4893-8ed9-3be361b7e220\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.870523 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/240e88fd-9bca-4893-8ed9-3be361b7e220-run-httpd\") pod \"240e88fd-9bca-4893-8ed9-3be361b7e220\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.870577 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-combined-ca-bundle\") pod \"240e88fd-9bca-4893-8ed9-3be361b7e220\" (UID: \"240e88fd-9bca-4893-8ed9-3be361b7e220\") " Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.871058 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/240e88fd-9bca-4893-8ed9-3be361b7e220-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "240e88fd-9bca-4893-8ed9-3be361b7e220" (UID: "240e88fd-9bca-4893-8ed9-3be361b7e220"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.871659 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5548\" (UniqueName: \"kubernetes.io/projected/d9477902-43a4-4af0-9d5b-0a063514b7e9-kube-api-access-v5548\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.871679 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/240e88fd-9bca-4893-8ed9-3be361b7e220-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.871689 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rjtr\" (UniqueName: \"kubernetes.io/projected/ceb88d05-d1a2-4da2-a4ec-64649b5069e9-kube-api-access-4rjtr\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.871988 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/240e88fd-9bca-4893-8ed9-3be361b7e220-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "240e88fd-9bca-4893-8ed9-3be361b7e220" (UID: "240e88fd-9bca-4893-8ed9-3be361b7e220"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.901272 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-scripts" (OuterVolumeSpecName: "scripts") pod "240e88fd-9bca-4893-8ed9-3be361b7e220" (UID: "240e88fd-9bca-4893-8ed9-3be361b7e220"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:31 crc kubenswrapper[4805]: I1216 12:17:31.906437 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/240e88fd-9bca-4893-8ed9-3be361b7e220-kube-api-access-zh2n8" (OuterVolumeSpecName: "kube-api-access-zh2n8") pod "240e88fd-9bca-4893-8ed9-3be361b7e220" (UID: "240e88fd-9bca-4893-8ed9-3be361b7e220"). InnerVolumeSpecName "kube-api-access-zh2n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.018822 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/240e88fd-9bca-4893-8ed9-3be361b7e220-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.019476 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh2n8\" (UniqueName: \"kubernetes.io/projected/240e88fd-9bca-4893-8ed9-3be361b7e220-kube-api-access-zh2n8\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.019626 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.026441 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "240e88fd-9bca-4893-8ed9-3be361b7e220" (UID: "240e88fd-9bca-4893-8ed9-3be361b7e220"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.129312 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.165304 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "240e88fd-9bca-4893-8ed9-3be361b7e220" (UID: "240e88fd-9bca-4893-8ed9-3be361b7e220"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.167175 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-config-data" (OuterVolumeSpecName: "config-data") pod "240e88fd-9bca-4893-8ed9-3be361b7e220" (UID: "240e88fd-9bca-4893-8ed9-3be361b7e220"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.231479 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.231532 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240e88fd-9bca-4893-8ed9-3be361b7e220-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.367195 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9hjq9" event={"ID":"ceb88d05-d1a2-4da2-a4ec-64649b5069e9","Type":"ContainerDied","Data":"a9e13c3c0499eaecfa076a5cf35a9e2a5382d5ef50ea968c67bd89621545285f"} Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.367231 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9hjq9" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.367242 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9e13c3c0499eaecfa076a5cf35a9e2a5382d5ef50ea968c67bd89621545285f" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.372888 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fb4np" event={"ID":"d9477902-43a4-4af0-9d5b-0a063514b7e9","Type":"ContainerDied","Data":"435b3d9064770029d0682402f68cf61f00bc2b4e050917591798a869633ab361"} Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.372927 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="435b3d9064770029d0682402f68cf61f00bc2b4e050917591798a869633ab361" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.373008 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-fb4np" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.379232 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"240e88fd-9bca-4893-8ed9-3be361b7e220","Type":"ContainerDied","Data":"930d6fbc28bddd3bd7092d04b4b41b8e6db9e15bbc9cca09cd338c559cb684c3"} Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.379304 4805 scope.go:117] "RemoveContainer" containerID="7ea2619807242d249a4765df5b1e8f68ecabfb7f80819ebb4d6a7093964fdac9" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.379460 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.406747 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d319359d-f646-42d6-95ac-4a64f4177e1e" containerName="cinder-scheduler" containerID="cri-o://0d8af945ccef93ed8b85c8065f8935c78efb794a78b1f9ca470528f1f16bcca9" gracePeriod=30 Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.407064 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b7ccad5e-bb55-4439-964a-2830bacf95e2","Type":"ContainerStarted","Data":"22f7bd24adc514c8f2755edebc88264f746c68aa6cfe702995de9b6182eb567e"} Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.407101 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d319359d-f646-42d6-95ac-4a64f4177e1e" containerName="probe" containerID="cri-o://4de681c0a3b8611566616eefd51ad6e67e4fe891a8d5a217a864876cf94e07ec" gracePeriod=30 Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.453131 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.458411 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.501853 4805 scope.go:117] "RemoveContainer" containerID="b94e42daddf885b2fda46ad859a50156c31f6b6455c3bfb675d652bd80bca979" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.507479 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:17:32 crc kubenswrapper[4805]: E1216 12:17:32.507995 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb88d05-d1a2-4da2-a4ec-64649b5069e9" containerName="mariadb-database-create" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.508014 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb88d05-d1a2-4da2-a4ec-64649b5069e9" containerName="mariadb-database-create" Dec 16 12:17:32 crc kubenswrapper[4805]: E1216 12:17:32.508035 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9477902-43a4-4af0-9d5b-0a063514b7e9" containerName="mariadb-database-create" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.508042 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9477902-43a4-4af0-9d5b-0a063514b7e9" containerName="mariadb-database-create" Dec 16 12:17:32 crc kubenswrapper[4805]: E1216 12:17:32.508053 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerName="proxy-httpd" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.508062 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" 
containerName="proxy-httpd" Dec 16 12:17:32 crc kubenswrapper[4805]: E1216 12:17:32.508104 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerName="ceilometer-notification-agent" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.508113 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerName="ceilometer-notification-agent" Dec 16 12:17:32 crc kubenswrapper[4805]: E1216 12:17:32.508130 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerName="ceilometer-central-agent" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.508158 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerName="ceilometer-central-agent" Dec 16 12:17:32 crc kubenswrapper[4805]: E1216 12:17:32.508176 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9202f107-b728-488a-a01a-c4412be6e808" containerName="mariadb-database-create" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.508184 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9202f107-b728-488a-a01a-c4412be6e808" containerName="mariadb-database-create" Dec 16 12:17:32 crc kubenswrapper[4805]: E1216 12:17:32.508202 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerName="sg-core" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.508211 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerName="sg-core" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.508415 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerName="sg-core" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.508440 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerName="proxy-httpd" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.508456 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9477902-43a4-4af0-9d5b-0a063514b7e9" containerName="mariadb-database-create" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.508477 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9202f107-b728-488a-a01a-c4412be6e808" containerName="mariadb-database-create" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.508486 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerName="ceilometer-central-agent" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.508498 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" containerName="ceilometer-notification-agent" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.508514 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb88d05-d1a2-4da2-a4ec-64649b5069e9" containerName="mariadb-database-create" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.531999 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.552491 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.552752 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.559995 4805 scope.go:117] "RemoveContainer" containerID="83537b3a80b5fefac225915ab72a7fbd4c12a7e5a26dadd67e61c98e81868e3f" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.595189 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="240e88fd-9bca-4893-8ed9-3be361b7e220" path="/var/lib/kubelet/pods/240e88fd-9bca-4893-8ed9-3be361b7e220/volumes" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.596313 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.649505 4805 scope.go:117] "RemoveContainer" containerID="b4392bb4d66d6d693f3904acf5dfb8b2f8564ead6405e5a086ff58f03714cc3a" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.662179 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8446402e-7a10-4670-8245-d8d25a3791e6-run-httpd\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.662234 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.662297 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8446402e-7a10-4670-8245-d8d25a3791e6-log-httpd\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.662456 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-scripts\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.668200 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-config-data\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.668245 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq57x\" (UniqueName: \"kubernetes.io/projected/8446402e-7a10-4670-8245-d8d25a3791e6-kube-api-access-kq57x\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.669637 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.772878 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8446402e-7a10-4670-8245-d8d25a3791e6-log-httpd\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.772998 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-scripts\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.773032 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-config-data\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.773050 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq57x\" (UniqueName: \"kubernetes.io/projected/8446402e-7a10-4670-8245-d8d25a3791e6-kube-api-access-kq57x\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.773087 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.773177 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8446402e-7a10-4670-8245-d8d25a3791e6-run-httpd\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.773201 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.777441 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8446402e-7a10-4670-8245-d8d25a3791e6-log-httpd\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.777651 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8446402e-7a10-4670-8245-d8d25a3791e6-run-httpd\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.780903 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.789564 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-config-data\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.811007 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.827792 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-scripts\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.854746 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq57x\" (UniqueName: \"kubernetes.io/projected/8446402e-7a10-4670-8245-d8d25a3791e6-kube-api-access-kq57x\") pod \"ceilometer-0\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " pod="openstack/ceilometer-0" Dec 16 12:17:32 crc kubenswrapper[4805]: I1216 12:17:32.906475 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:17:33 crc kubenswrapper[4805]: I1216 12:17:33.850717 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 16 12:17:33 crc kubenswrapper[4805]: I1216 12:17:33.885773 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:17:34 crc kubenswrapper[4805]: I1216 12:17:34.772905 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b7ccad5e-bb55-4439-964a-2830bacf95e2","Type":"ContainerStarted","Data":"d48a3980431513c1cc598b1e5f04734fa981c0f402b269096dee890af63b5077"} Dec 16 12:17:34 crc kubenswrapper[4805]: I1216 12:17:34.794062 4805 generic.go:334] "Generic (PLEG): container finished" podID="a0bdd7ef-5424-4628-b866-0aa42d44f257" containerID="9bdf3d24720c99d7ad95d8dbacf0c09ce9332ae14b7624c92ceba29805599bca" exitCode=0 Dec 16 12:17:34 crc kubenswrapper[4805]: I1216 12:17:34.794127 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a0bdd7ef-5424-4628-b866-0aa42d44f257","Type":"ContainerDied","Data":"9bdf3d24720c99d7ad95d8dbacf0c09ce9332ae14b7624c92ceba29805599bca"} Dec 16 12:17:34 crc kubenswrapper[4805]: I1216 12:17:34.807430 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8446402e-7a10-4670-8245-d8d25a3791e6","Type":"ContainerStarted","Data":"6fd8faf7d263d300a0d783cdbbb8c5a802d27d5ee12187c8aff29d68fc75fa4f"} Dec 16 12:17:34 crc kubenswrapper[4805]: I1216 12:17:34.809862 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.809824854 podStartE2EDuration="6.809824854s" podCreationTimestamp="2025-12-16 12:17:28 +0000 UTC" 
Dec 16 12:17:34 crc kubenswrapper[4805]: I1216 12:17:34.827752 4805 generic.go:334] "Generic (PLEG): container finished" podID="d319359d-f646-42d6-95ac-4a64f4177e1e" containerID="4de681c0a3b8611566616eefd51ad6e67e4fe891a8d5a217a864876cf94e07ec" exitCode=0
Dec 16 12:17:34 crc kubenswrapper[4805]: I1216 12:17:34.827783 4805 generic.go:334] "Generic (PLEG): container finished" podID="d319359d-f646-42d6-95ac-4a64f4177e1e" containerID="0d8af945ccef93ed8b85c8065f8935c78efb794a78b1f9ca470528f1f16bcca9" exitCode=0
Dec 16 12:17:34 crc kubenswrapper[4805]: I1216 12:17:34.827807 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d319359d-f646-42d6-95ac-4a64f4177e1e","Type":"ContainerDied","Data":"4de681c0a3b8611566616eefd51ad6e67e4fe891a8d5a217a864876cf94e07ec"}
Dec 16 12:17:34 crc kubenswrapper[4805]: I1216 12:17:34.827834 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d319359d-f646-42d6-95ac-4a64f4177e1e","Type":"ContainerDied","Data":"0d8af945ccef93ed8b85c8065f8935c78efb794a78b1f9ca470528f1f16bcca9"}
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.438954 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.449132 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.583241 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-combined-ca-bundle\") pod \"a0bdd7ef-5424-4628-b866-0aa42d44f257\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") "
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.583728 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d319359d-f646-42d6-95ac-4a64f4177e1e-etc-machine-id\") pod \"d319359d-f646-42d6-95ac-4a64f4177e1e\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") "
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.588196 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-scripts\") pod \"a0bdd7ef-5424-4628-b866-0aa42d44f257\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") "
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.588408 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-combined-ca-bundle\") pod \"d319359d-f646-42d6-95ac-4a64f4177e1e\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") "
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.588471 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-public-tls-certs\") pod \"a0bdd7ef-5424-4628-b866-0aa42d44f257\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") "
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.588569 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbpr7\" (UniqueName: \"kubernetes.io/projected/a0bdd7ef-5424-4628-b866-0aa42d44f257-kube-api-access-vbpr7\") pod \"a0bdd7ef-5424-4628-b866-0aa42d44f257\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") "
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.588635 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0bdd7ef-5424-4628-b866-0aa42d44f257-httpd-run\") pod \"a0bdd7ef-5424-4628-b866-0aa42d44f257\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") "
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.588658 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-config-data\") pod \"d319359d-f646-42d6-95ac-4a64f4177e1e\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") "
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.588699 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-scripts\") pod \"d319359d-f646-42d6-95ac-4a64f4177e1e\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") "
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.588732 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-config-data\") pod \"a0bdd7ef-5424-4628-b866-0aa42d44f257\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") "
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.588800 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0bdd7ef-5424-4628-b866-0aa42d44f257-logs\") pod \"a0bdd7ef-5424-4628-b866-0aa42d44f257\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") "
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.588833 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhc2w\" (UniqueName: \"kubernetes.io/projected/d319359d-f646-42d6-95ac-4a64f4177e1e-kube-api-access-fhc2w\") pod \"d319359d-f646-42d6-95ac-4a64f4177e1e\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") "
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.588868 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"a0bdd7ef-5424-4628-b866-0aa42d44f257\" (UID: \"a0bdd7ef-5424-4628-b866-0aa42d44f257\") "
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.588892 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-config-data-custom\") pod \"d319359d-f646-42d6-95ac-4a64f4177e1e\" (UID: \"d319359d-f646-42d6-95ac-4a64f4177e1e\") "
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.583993 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d319359d-f646-42d6-95ac-4a64f4177e1e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d319359d-f646-42d6-95ac-4a64f4177e1e" (UID: "d319359d-f646-42d6-95ac-4a64f4177e1e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.589679 4805 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d319359d-f646-42d6-95ac-4a64f4177e1e-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.593416 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0bdd7ef-5424-4628-b866-0aa42d44f257-logs" (OuterVolumeSpecName: "logs") pod "a0bdd7ef-5424-4628-b866-0aa42d44f257" (UID: "a0bdd7ef-5424-4628-b866-0aa42d44f257"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.601333 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0bdd7ef-5424-4628-b866-0aa42d44f257-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a0bdd7ef-5424-4628-b866-0aa42d44f257" (UID: "a0bdd7ef-5424-4628-b866-0aa42d44f257"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.601600 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3854-account-create-lmk7n"]
Dec 16 12:17:35 crc kubenswrapper[4805]: E1216 12:17:35.602419 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d319359d-f646-42d6-95ac-4a64f4177e1e" containerName="probe"
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.602521 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d319359d-f646-42d6-95ac-4a64f4177e1e" containerName="probe"
Dec 16 12:17:35 crc kubenswrapper[4805]: E1216 12:17:35.602623 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0bdd7ef-5424-4628-b866-0aa42d44f257" containerName="glance-log"
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.602695 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0bdd7ef-5424-4628-b866-0aa42d44f257" containerName="glance-log"
Dec 16 12:17:35 crc kubenswrapper[4805]: E1216 12:17:35.602792 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d319359d-f646-42d6-95ac-4a64f4177e1e" containerName="cinder-scheduler"
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.602874 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d319359d-f646-42d6-95ac-4a64f4177e1e" containerName="cinder-scheduler"
Dec 16 12:17:35 crc kubenswrapper[4805]: E1216 12:17:35.602997 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0bdd7ef-5424-4628-b866-0aa42d44f257" containerName="glance-httpd"
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.603065 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0bdd7ef-5424-4628-b866-0aa42d44f257" containerName="glance-httpd"
Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.606085 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "a0bdd7ef-5424-4628-b866-0aa42d44f257" (UID: "a0bdd7ef-5424-4628-b866-0aa42d44f257"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.611774 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0bdd7ef-5424-4628-b866-0aa42d44f257" containerName="glance-log" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.611858 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0bdd7ef-5424-4628-b866-0aa42d44f257" containerName="glance-httpd" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.611878 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d319359d-f646-42d6-95ac-4a64f4177e1e" containerName="probe" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.611905 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d319359d-f646-42d6-95ac-4a64f4177e1e" containerName="cinder-scheduler" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.612732 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3854-account-create-lmk7n" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.614111 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-scripts" (OuterVolumeSpecName: "scripts") pod "d319359d-f646-42d6-95ac-4a64f4177e1e" (UID: "d319359d-f646-42d6-95ac-4a64f4177e1e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.616419 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d319359d-f646-42d6-95ac-4a64f4177e1e" (UID: "d319359d-f646-42d6-95ac-4a64f4177e1e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.617509 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.626170 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3854-account-create-lmk7n"] Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.648131 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d319359d-f646-42d6-95ac-4a64f4177e1e-kube-api-access-fhc2w" (OuterVolumeSpecName: "kube-api-access-fhc2w") pod "d319359d-f646-42d6-95ac-4a64f4177e1e" (UID: "d319359d-f646-42d6-95ac-4a64f4177e1e"). InnerVolumeSpecName "kube-api-access-fhc2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.648348 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0bdd7ef-5424-4628-b866-0aa42d44f257-kube-api-access-vbpr7" (OuterVolumeSpecName: "kube-api-access-vbpr7") pod "a0bdd7ef-5424-4628-b866-0aa42d44f257" (UID: "a0bdd7ef-5424-4628-b866-0aa42d44f257"). InnerVolumeSpecName "kube-api-access-vbpr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.689938 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-scripts" (OuterVolumeSpecName: "scripts") pod "a0bdd7ef-5424-4628-b866-0aa42d44f257" (UID: "a0bdd7ef-5424-4628-b866-0aa42d44f257"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.691568 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.691604 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbpr7\" (UniqueName: \"kubernetes.io/projected/a0bdd7ef-5424-4628-b866-0aa42d44f257-kube-api-access-vbpr7\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.691615 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0bdd7ef-5424-4628-b866-0aa42d44f257-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.691624 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.691633 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0bdd7ef-5424-4628-b866-0aa42d44f257-logs\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.691641 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhc2w\" (UniqueName: \"kubernetes.io/projected/d319359d-f646-42d6-95ac-4a64f4177e1e-kube-api-access-fhc2w\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.691661 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.691670 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.755749 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0bdd7ef-5424-4628-b866-0aa42d44f257" (UID: "a0bdd7ef-5424-4628-b866-0aa42d44f257"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.779716 4805 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.793265 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxxt2\" (UniqueName: \"kubernetes.io/projected/46d3adf2-5db0-47b1-8488-39d3c7a0c8db-kube-api-access-cxxt2\") pod \"nova-api-3854-account-create-lmk7n\" (UID: \"46d3adf2-5db0-47b1-8488-39d3c7a0c8db\") " pod="openstack/nova-api-3854-account-create-lmk7n" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.793438 4805 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.793455 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.795451 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-493e-account-create-ldm5n"] Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.797803 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-493e-account-create-ldm5n" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.813474 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.826154 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-493e-account-create-ldm5n"] Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.826497 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a0bdd7ef-5424-4628-b866-0aa42d44f257" (UID: "a0bdd7ef-5424-4628-b866-0aa42d44f257"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.900341 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq6cv\" (UniqueName: \"kubernetes.io/projected/5868d05e-60ea-4a7a-b020-1f590921bd31-kube-api-access-lq6cv\") pod \"nova-cell0-493e-account-create-ldm5n\" (UID: \"5868d05e-60ea-4a7a-b020-1f590921bd31\") " pod="openstack/nova-cell0-493e-account-create-ldm5n" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.900468 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxxt2\" (UniqueName: \"kubernetes.io/projected/46d3adf2-5db0-47b1-8488-39d3c7a0c8db-kube-api-access-cxxt2\") pod \"nova-api-3854-account-create-lmk7n\" (UID: \"46d3adf2-5db0-47b1-8488-39d3c7a0c8db\") " pod="openstack/nova-api-3854-account-create-lmk7n" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.900535 4805 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.909942 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-config-data" (OuterVolumeSpecName: "config-data") pod "a0bdd7ef-5424-4628-b866-0aa42d44f257" (UID: "a0bdd7ef-5424-4628-b866-0aa42d44f257"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.912707 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8446402e-7a10-4670-8245-d8d25a3791e6","Type":"ContainerStarted","Data":"f1216b131f7c035f899f28871f636750d8927e6ec85e60838764c445d02e407d"} Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.941243 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxxt2\" (UniqueName: \"kubernetes.io/projected/46d3adf2-5db0-47b1-8488-39d3c7a0c8db-kube-api-access-cxxt2\") pod \"nova-api-3854-account-create-lmk7n\" (UID: \"46d3adf2-5db0-47b1-8488-39d3c7a0c8db\") " pod="openstack/nova-api-3854-account-create-lmk7n" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.944376 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d319359d-f646-42d6-95ac-4a64f4177e1e","Type":"ContainerDied","Data":"466930c7f8db91d8df4733a4c8b1d40c241fbe29f08df9d60940a39a803b49d2"} Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.944427 4805 scope.go:117] "RemoveContainer" containerID="4de681c0a3b8611566616eefd51ad6e67e4fe891a8d5a217a864876cf94e07ec" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.944559 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.951347 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-config-data" (OuterVolumeSpecName: "config-data") pod "d319359d-f646-42d6-95ac-4a64f4177e1e" (UID: "d319359d-f646-42d6-95ac-4a64f4177e1e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.951661 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d319359d-f646-42d6-95ac-4a64f4177e1e" (UID: "d319359d-f646-42d6-95ac-4a64f4177e1e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.970026 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 16 12:17:35 crc kubenswrapper[4805]: I1216 12:17:35.973062 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a0bdd7ef-5424-4628-b866-0aa42d44f257","Type":"ContainerDied","Data":"77dd06c4891fa1b9c374c8b1db0206e46f808de9c32691bd27ee06907543f093"} Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.003518 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq6cv\" (UniqueName: \"kubernetes.io/projected/5868d05e-60ea-4a7a-b020-1f590921bd31-kube-api-access-lq6cv\") pod \"nova-cell0-493e-account-create-ldm5n\" (UID: \"5868d05e-60ea-4a7a-b020-1f590921bd31\") " pod="openstack/nova-cell0-493e-account-create-ldm5n" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.003688 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.003702 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d319359d-f646-42d6-95ac-4a64f4177e1e-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.003713 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0bdd7ef-5424-4628-b866-0aa42d44f257-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.021967 4805 scope.go:117] "RemoveContainer" containerID="0d8af945ccef93ed8b85c8065f8935c78efb794a78b1f9ca470528f1f16bcca9" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.024932 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.038069 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.041823 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq6cv\" (UniqueName: \"kubernetes.io/projected/5868d05e-60ea-4a7a-b020-1f590921bd31-kube-api-access-lq6cv\") pod \"nova-cell0-493e-account-create-ldm5n\" (UID: \"5868d05e-60ea-4a7a-b020-1f590921bd31\") " pod="openstack/nova-cell0-493e-account-create-ldm5n" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.061772 4805 scope.go:117] "RemoveContainer" containerID="9bdf3d24720c99d7ad95d8dbacf0c09ce9332ae14b7624c92ceba29805599bca" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.077444 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.078994 4805 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.081364 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.082711 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.094781 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.190350 4805 scope.go:117] "RemoveContainer" containerID="bde0dfdb973e64b63035ae69186cf5b44e448ad2da47174ca17f5d68bd341678" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.211119 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.211229 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.211286 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cffzs\" (UniqueName: \"kubernetes.io/projected/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-kube-api-access-cffzs\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.211312 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-config-data\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.211397 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-scripts\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.211425 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.211468 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-logs\") pod \"glance-default-external-api-0\" 
(UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.211544 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.227387 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3854-account-create-lmk7n" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.256540 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-493e-account-create-ldm5n" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.318845 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.342078 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.342201 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.342476 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.342555 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.342641 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cffzs\" (UniqueName: \"kubernetes.io/projected/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-kube-api-access-cffzs\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.342679 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-config-data\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.342825 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-scripts\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.342866 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.342936 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-logs\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.343737 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-logs\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.344216 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.346382 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.365205 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.372831 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-scripts\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.373500 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.386970 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cffzs\" (UniqueName: \"kubernetes.io/projected/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-kube-api-access-cffzs\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.394108 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/645792d6-df6f-4e8c-a3cb-0b150ff5cd37-config-data\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.425604 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.459846 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"645792d6-df6f-4e8c-a3cb-0b150ff5cd37\") " pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.475246 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.475687 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.478440 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.550229 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-scripts\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.550577 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.550636 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.550755 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-config-data\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.550802 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.550931 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc8tp\" (UniqueName: \"kubernetes.io/projected/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-kube-api-access-lc8tp\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 
12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.591127 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0bdd7ef-5424-4628-b866-0aa42d44f257" path="/var/lib/kubelet/pods/a0bdd7ef-5424-4628-b866-0aa42d44f257/volumes" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.591958 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d319359d-f646-42d6-95ac-4a64f4177e1e" path="/var/lib/kubelet/pods/d319359d-f646-42d6-95ac-4a64f4177e1e/volumes" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.652832 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.652903 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.653003 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-config-data\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.653042 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.653192 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc8tp\" (UniqueName: \"kubernetes.io/projected/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-kube-api-access-lc8tp\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.653296 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-scripts\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.654633 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.671999 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.677646 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.677985 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-scripts\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.683362 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc8tp\" (UniqueName: \"kubernetes.io/projected/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-kube-api-access-lc8tp\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.687636 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67f02ee6-d5c4-4010-983e-c4ee5e24a2c0-config-data\") pod \"cinder-scheduler-0\" (UID: \"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0\") " pod="openstack/cinder-scheduler-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.708350 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 16 12:17:36 crc kubenswrapper[4805]: I1216 12:17:36.884348 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 16 12:17:37 crc kubenswrapper[4805]: I1216 12:17:37.019219 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3854-account-create-lmk7n"] Dec 16 12:17:37 crc kubenswrapper[4805]: I1216 12:17:37.121326 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="bdfe5fb7-a1ea-4cd6-887d-46b30ece2329" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.167:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:17:37 crc kubenswrapper[4805]: I1216 12:17:37.213431 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-493e-account-create-ldm5n"] Dec 16 12:17:37 crc kubenswrapper[4805]: I1216 12:17:37.729043 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 16 12:17:37 crc kubenswrapper[4805]: W1216 12:17:37.764906 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67f02ee6_d5c4_4010_983e_c4ee5e24a2c0.slice/crio-5329ea26f55ec2b03613e07d082303d5bc269b25e06fe3d8f2f12be8042c8bab WatchSource:0}: Error finding container 5329ea26f55ec2b03613e07d082303d5bc269b25e06fe3d8f2f12be8042c8bab: Status 404 returned error can't find the container with id 5329ea26f55ec2b03613e07d082303d5bc269b25e06fe3d8f2f12be8042c8bab Dec 16 12:17:37 crc kubenswrapper[4805]: I1216 12:17:37.821252 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 16 12:17:37 crc kubenswrapper[4805]: W1216 12:17:37.832672 4805 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod645792d6_df6f_4e8c_a3cb_0b150ff5cd37.slice/crio-ad776e559a060912972cbbafe6ad20772bedafa184f5a60787fa5805f2a81c76 WatchSource:0}: Error finding container ad776e559a060912972cbbafe6ad20772bedafa184f5a60787fa5805f2a81c76: Status 404 returned error can't find the container with id ad776e559a060912972cbbafe6ad20772bedafa184f5a60787fa5805f2a81c76 Dec 16 12:17:38 crc kubenswrapper[4805]: I1216 12:17:38.061901 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0","Type":"ContainerStarted","Data":"5329ea26f55ec2b03613e07d082303d5bc269b25e06fe3d8f2f12be8042c8bab"} Dec 16 12:17:38 crc kubenswrapper[4805]: I1216 12:17:38.113467 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3854-account-create-lmk7n" event={"ID":"46d3adf2-5db0-47b1-8488-39d3c7a0c8db","Type":"ContainerStarted","Data":"649bde08cbfc53f2717f54067820c222776a9c09a29bf9bd1e91ccfbf0ff0d29"} Dec 16 12:17:38 crc kubenswrapper[4805]: I1216 12:17:38.113525 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3854-account-create-lmk7n" event={"ID":"46d3adf2-5db0-47b1-8488-39d3c7a0c8db","Type":"ContainerStarted","Data":"8e7aff75311828eff548813e0812c31bb168d2683b260888c33528cbeecd8628"} Dec 16 12:17:38 crc kubenswrapper[4805]: I1216 12:17:38.122928 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="bdfe5fb7-a1ea-4cd6-887d-46b30ece2329" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.167:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:17:38 crc kubenswrapper[4805]: I1216 12:17:38.124989 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"645792d6-df6f-4e8c-a3cb-0b150ff5cd37","Type":"ContainerStarted","Data":"ad776e559a060912972cbbafe6ad20772bedafa184f5a60787fa5805f2a81c76"} Dec 16 12:17:38 crc kubenswrapper[4805]: I1216 12:17:38.139043 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-3854-account-create-lmk7n" podStartSLOduration=3.139020723 podStartE2EDuration="3.139020723s" podCreationTimestamp="2025-12-16 12:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:17:38.1305518 +0000 UTC m=+1331.848809605" watchObservedRunningTime="2025-12-16 12:17:38.139020723 +0000 UTC m=+1331.857278548" Dec 16 12:17:38 crc kubenswrapper[4805]: I1216 12:17:38.164655 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8446402e-7a10-4670-8245-d8d25a3791e6","Type":"ContainerStarted","Data":"b473ff820f3b14d80210abc2a0a98b73aa376efa98ecb06aec429ead1a6cdf9b"} Dec 16 12:17:38 crc kubenswrapper[4805]: I1216 12:17:38.202674 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-493e-account-create-ldm5n" event={"ID":"5868d05e-60ea-4a7a-b020-1f590921bd31","Type":"ContainerStarted","Data":"5a9bf9c4d6c96f9e54ec44ab02db3f3a38a189e7957262ae1890c2746e5c5340"} Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.089243 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.089430 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.093451 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-59856f68c6-mltb2" podUID="e064fe82-3a6a-490b-a123-38a56ae4234c" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.093778 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-59856f68c6-mltb2" podUID="e064fe82-3a6a-490b-a123-38a56ae4234c" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.118230 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-59856f68c6-mltb2" podUID="e064fe82-3a6a-490b-a123-38a56ae4234c" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.141961 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.265790 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"645792d6-df6f-4e8c-a3cb-0b150ff5cd37","Type":"ContainerStarted","Data":"c76b5cd2189e4f6febed260412881a626e19d300a99b8e942f793136de273297"} Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.276037 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8446402e-7a10-4670-8245-d8d25a3791e6","Type":"ContainerStarted","Data":"fad9456ccf1b15a8d62e6161232edfff6d8b243c20d0fbb9d0140d616b516dfb"} Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.294509 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.299486 4805 generic.go:334] "Generic (PLEG): container finished" podID="5868d05e-60ea-4a7a-b020-1f590921bd31" containerID="ffae44ccbd45385089da2834ee56ff47a14f824b490e566daf19d2eb44a2b611" exitCode=0 Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.299556 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-493e-account-create-ldm5n" event={"ID":"5868d05e-60ea-4a7a-b020-1f590921bd31","Type":"ContainerDied","Data":"ffae44ccbd45385089da2834ee56ff47a14f824b490e566daf19d2eb44a2b611"} Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.313546 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0","Type":"ContainerStarted","Data":"67f22d20b5190a66b3844df88700394a95cd8200c55553cfcb12a306c73e77ca"} Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.349470 4805 generic.go:334] "Generic (PLEG): container finished" podID="46d3adf2-5db0-47b1-8488-39d3c7a0c8db" containerID="649bde08cbfc53f2717f54067820c222776a9c09a29bf9bd1e91ccfbf0ff0d29" exitCode=0 Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.351000 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3854-account-create-lmk7n" event={"ID":"46d3adf2-5db0-47b1-8488-39d3c7a0c8db","Type":"ContainerDied","Data":"649bde08cbfc53f2717f54067820c222776a9c09a29bf9bd1e91ccfbf0ff0d29"} Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.351031 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Dec 16 12:17:39 crc kubenswrapper[4805]: I1216 12:17:39.351154 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 16 12:17:40 crc kubenswrapper[4805]: I1216 12:17:40.362934 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"67f02ee6-d5c4-4010-983e-c4ee5e24a2c0","Type":"ContainerStarted","Data":"281d4416930ac92e9b8afd700f13183aa2c8c0268e07359ffc3fab3caf92c4b8"} Dec 16 12:17:40 crc kubenswrapper[4805]: I1216 12:17:40.366221 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"645792d6-df6f-4e8c-a3cb-0b150ff5cd37","Type":"ContainerStarted","Data":"ed8769c8ff01c33ad628265470950a2688d9cde6cd4b32f57763d869a1c4771f"} Dec 16 12:17:40 crc kubenswrapper[4805]: I1216 12:17:40.400905 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.400879461 podStartE2EDuration="4.400879461s" podCreationTimestamp="2025-12-16 12:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:17:40.400657015 +0000 UTC m=+1334.118914820" watchObservedRunningTime="2025-12-16 12:17:40.400879461 +0000 UTC m=+1334.119137286" Dec 16 12:17:40 crc kubenswrapper[4805]: I1216 12:17:40.453001 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.4529747650000004 podStartE2EDuration="4.452974765s" podCreationTimestamp="2025-12-16 12:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:17:40.425011003 +0000 UTC m=+1334.143268828" watchObservedRunningTime="2025-12-16 12:17:40.452974765 +0000 UTC m=+1334.171232590" Dec 16 12:17:40 crc kubenswrapper[4805]: I1216 12:17:40.946892 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3854-account-create-lmk7n" Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.034977 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxxt2\" (UniqueName: \"kubernetes.io/projected/46d3adf2-5db0-47b1-8488-39d3c7a0c8db-kube-api-access-cxxt2\") pod \"46d3adf2-5db0-47b1-8488-39d3c7a0c8db\" (UID: \"46d3adf2-5db0-47b1-8488-39d3c7a0c8db\") " Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.043382 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d3adf2-5db0-47b1-8488-39d3c7a0c8db-kube-api-access-cxxt2" (OuterVolumeSpecName: "kube-api-access-cxxt2") pod "46d3adf2-5db0-47b1-8488-39d3c7a0c8db" (UID: "46d3adf2-5db0-47b1-8488-39d3c7a0c8db"). InnerVolumeSpecName "kube-api-access-cxxt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.058683 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-493e-account-create-ldm5n" Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.136791 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq6cv\" (UniqueName: \"kubernetes.io/projected/5868d05e-60ea-4a7a-b020-1f590921bd31-kube-api-access-lq6cv\") pod \"5868d05e-60ea-4a7a-b020-1f590921bd31\" (UID: \"5868d05e-60ea-4a7a-b020-1f590921bd31\") " Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.137601 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxxt2\" (UniqueName: \"kubernetes.io/projected/46d3adf2-5db0-47b1-8488-39d3c7a0c8db-kube-api-access-cxxt2\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.149422 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5868d05e-60ea-4a7a-b020-1f590921bd31-kube-api-access-lq6cv" (OuterVolumeSpecName: "kube-api-access-lq6cv") pod "5868d05e-60ea-4a7a-b020-1f590921bd31" (UID: "5868d05e-60ea-4a7a-b020-1f590921bd31"). InnerVolumeSpecName "kube-api-access-lq6cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.240729 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq6cv\" (UniqueName: \"kubernetes.io/projected/5868d05e-60ea-4a7a-b020-1f590921bd31-kube-api-access-lq6cv\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.404712 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8446402e-7a10-4670-8245-d8d25a3791e6","Type":"ContainerStarted","Data":"2b2eef2c110ce0997bd5c215cc1357b3feadad5e38774e8d422bf6093015ea54"} Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.406110 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.408842 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-493e-account-create-ldm5n" Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.409222 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-493e-account-create-ldm5n" event={"ID":"5868d05e-60ea-4a7a-b020-1f590921bd31","Type":"ContainerDied","Data":"5a9bf9c4d6c96f9e54ec44ab02db3f3a38a189e7957262ae1890c2746e5c5340"} Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.409263 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a9bf9c4d6c96f9e54ec44ab02db3f3a38a189e7957262ae1890c2746e5c5340" Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.412179 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3854-account-create-lmk7n" event={"ID":"46d3adf2-5db0-47b1-8488-39d3c7a0c8db","Type":"ContainerDied","Data":"8e7aff75311828eff548813e0812c31bb168d2683b260888c33528cbeecd8628"} Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.412213 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e7aff75311828eff548813e0812c31bb168d2683b260888c33528cbeecd8628" Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.412259 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3854-account-create-lmk7n" Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.412337 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.412368 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.448378 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.1753099750000002 podStartE2EDuration="9.448358792s" podCreationTimestamp="2025-12-16 12:17:32 +0000 UTC" firstStartedPulling="2025-12-16 12:17:33.850340286 +0000 UTC m=+1327.568598091" lastFinishedPulling="2025-12-16 12:17:40.123389103 +0000 UTC m=+1333.841646908" observedRunningTime="2025-12-16 12:17:41.433021153 +0000 UTC m=+1335.151278958" watchObservedRunningTime="2025-12-16 12:17:41.448358792 +0000 UTC m=+1335.166616607" Dec 16 12:17:41 crc kubenswrapper[4805]: I1216 12:17:41.885249 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 16 12:17:42 crc kubenswrapper[4805]: I1216 12:17:42.127353 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="bdfe5fb7-a1ea-4cd6-887d-46b30ece2329" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.167:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:17:43 crc kubenswrapper[4805]: I1216 12:17:43.127431 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="bdfe5fb7-a1ea-4cd6-887d-46b30ece2329" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.167:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:17:43 crc kubenswrapper[4805]: I1216 12:17:43.157702 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 16 12:17:44 crc kubenswrapper[4805]: I1216 12:17:44.483913 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 16 12:17:44 crc kubenswrapper[4805]: I1216 12:17:44.484376 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:17:44 crc kubenswrapper[4805]: I1216 12:17:44.485800 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 16 12:17:45 crc kubenswrapper[4805]: I1216 12:17:45.975870 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-b002-account-create-v9rtw"] Dec 16 12:17:45 crc kubenswrapper[4805]: E1216 12:17:45.980596 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d3adf2-5db0-47b1-8488-39d3c7a0c8db" containerName="mariadb-account-create" Dec 16 12:17:45 crc kubenswrapper[4805]: I1216 12:17:45.980617 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d3adf2-5db0-47b1-8488-39d3c7a0c8db" containerName="mariadb-account-create" Dec 16 12:17:45 crc kubenswrapper[4805]: E1216 12:17:45.980635 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5868d05e-60ea-4a7a-b020-1f590921bd31" containerName="mariadb-account-create" Dec 16 12:17:45 crc kubenswrapper[4805]: I1216 12:17:45.980642 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5868d05e-60ea-4a7a-b020-1f590921bd31" 
containerName="mariadb-account-create" Dec 16 12:17:45 crc kubenswrapper[4805]: I1216 12:17:45.980802 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5868d05e-60ea-4a7a-b020-1f590921bd31" containerName="mariadb-account-create" Dec 16 12:17:45 crc kubenswrapper[4805]: I1216 12:17:45.980823 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d3adf2-5db0-47b1-8488-39d3c7a0c8db" containerName="mariadb-account-create" Dec 16 12:17:45 crc kubenswrapper[4805]: I1216 12:17:45.981421 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b002-account-create-v9rtw" Dec 16 12:17:45 crc kubenswrapper[4805]: I1216 12:17:45.987755 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.022053 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b002-account-create-v9rtw"] Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.047012 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx2nt\" (UniqueName: \"kubernetes.io/projected/cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd-kube-api-access-qx2nt\") pod \"nova-cell1-b002-account-create-v9rtw\" (UID: \"cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd\") " pod="openstack/nova-cell1-b002-account-create-v9rtw" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.148968 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx2nt\" (UniqueName: \"kubernetes.io/projected/cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd-kube-api-access-qx2nt\") pod \"nova-cell1-b002-account-create-v9rtw\" (UID: \"cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd\") " pod="openstack/nova-cell1-b002-account-create-v9rtw" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.195255 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx2nt\" (UniqueName: \"kubernetes.io/projected/cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd-kube-api-access-qx2nt\") pod \"nova-cell1-b002-account-create-v9rtw\" (UID: \"cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd\") " pod="openstack/nova-cell1-b002-account-create-v9rtw" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.244120 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jwhtf"] Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.245379 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.249322 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.250334 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.250455 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xdpdc" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.263904 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jwhtf"] Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.314833 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b002-account-create-v9rtw" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.352436 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7t5h\" (UniqueName: \"kubernetes.io/projected/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-kube-api-access-k7t5h\") pod \"nova-cell0-conductor-db-sync-jwhtf\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.352529 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jwhtf\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.352566 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-scripts\") pod \"nova-cell0-conductor-db-sync-jwhtf\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.352697 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-config-data\") pod \"nova-cell0-conductor-db-sync-jwhtf\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.461494 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jwhtf\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.461573 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-scripts\") pod \"nova-cell0-conductor-db-sync-jwhtf\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.461614 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-config-data\") pod \"nova-cell0-conductor-db-sync-jwhtf\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.461689 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7t5h\" (UniqueName: \"kubernetes.io/projected/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-kube-api-access-k7t5h\") pod \"nova-cell0-conductor-db-sync-jwhtf\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.469980 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-scripts\") pod \"nova-cell0-conductor-db-sync-jwhtf\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.482105 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-config-data\") pod \"nova-cell0-conductor-db-sync-jwhtf\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.482997 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jwhtf\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.493587 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7t5h\" (UniqueName: \"kubernetes.io/projected/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-kube-api-access-k7t5h\") pod \"nova-cell0-conductor-db-sync-jwhtf\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.585472 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.709773 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.709833 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.862391 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.876880 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 16 12:17:46 crc kubenswrapper[4805]: W1216 12:17:46.988904 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd4d60a4_7c36_4922_bd16_ab9d3ce6b0bd.slice/crio-e68e8cce1b0ee6c3e642968d223f0153ed52134683abb14d15b8cad1949fbd4a WatchSource:0}: Error finding container e68e8cce1b0ee6c3e642968d223f0153ed52134683abb14d15b8cad1949fbd4a: Status 404 returned error can't find the container with id e68e8cce1b0ee6c3e642968d223f0153ed52134683abb14d15b8cad1949fbd4a Dec 16 12:17:46 crc kubenswrapper[4805]: I1216 12:17:46.989691 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b002-account-create-v9rtw"] Dec 16 12:17:47 crc kubenswrapper[4805]: I1216 12:17:47.280826 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jwhtf"] Dec 16 12:17:47 crc kubenswrapper[4805]: W1216 12:17:47.286474 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76f685ad_cc66_4b5a_bc99_f7af5604cbaa.slice/crio-c490f2c97c5a6e40e9eee19dccb20ba6f5677c378c4b93f3fcf772f34dcc1b0e WatchSource:0}: Error finding 
container c490f2c97c5a6e40e9eee19dccb20ba6f5677c378c4b93f3fcf772f34dcc1b0e: Status 404 returned error can't find the container with id c490f2c97c5a6e40e9eee19dccb20ba6f5677c378c4b93f3fcf772f34dcc1b0e Dec 16 12:17:47 crc kubenswrapper[4805]: I1216 12:17:47.300882 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 16 12:17:47 crc kubenswrapper[4805]: I1216 12:17:47.540995 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b002-account-create-v9rtw" event={"ID":"cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd","Type":"ContainerStarted","Data":"14860bbb92b244a7639360b6504afdff17c24be6a58d0f4367998484d9525903"} Dec 16 12:17:47 crc kubenswrapper[4805]: I1216 12:17:47.541047 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b002-account-create-v9rtw" event={"ID":"cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd","Type":"ContainerStarted","Data":"e68e8cce1b0ee6c3e642968d223f0153ed52134683abb14d15b8cad1949fbd4a"} Dec 16 12:17:47 crc kubenswrapper[4805]: I1216 12:17:47.546255 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jwhtf" event={"ID":"76f685ad-cc66-4b5a-bc99-f7af5604cbaa","Type":"ContainerStarted","Data":"c490f2c97c5a6e40e9eee19dccb20ba6f5677c378c4b93f3fcf772f34dcc1b0e"} Dec 16 12:17:47 crc kubenswrapper[4805]: I1216 12:17:47.546901 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 16 12:17:47 crc kubenswrapper[4805]: I1216 12:17:47.547465 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 16 12:17:47 crc kubenswrapper[4805]: I1216 12:17:47.567577 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-b002-account-create-v9rtw" podStartSLOduration=2.567550416 podStartE2EDuration="2.567550416s" podCreationTimestamp="2025-12-16 12:17:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:17:47.560218085 +0000 UTC m=+1341.278475900" watchObservedRunningTime="2025-12-16 12:17:47.567550416 +0000 UTC m=+1341.285808231" Dec 16 12:17:48 crc kubenswrapper[4805]: I1216 12:17:48.568735 4805 generic.go:334] "Generic (PLEG): container finished" podID="cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd" containerID="14860bbb92b244a7639360b6504afdff17c24be6a58d0f4367998484d9525903" exitCode=0 Dec 16 12:17:48 crc kubenswrapper[4805]: I1216 12:17:48.569123 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b002-account-create-v9rtw" event={"ID":"cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd","Type":"ContainerDied","Data":"14860bbb92b244a7639360b6504afdff17c24be6a58d0f4367998484d9525903"} Dec 16 12:17:48 crc kubenswrapper[4805]: I1216 12:17:48.600198 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-57fbfd7dcc-lq2v9" Dec 16 12:17:48 crc kubenswrapper[4805]: I1216 12:17:48.734424 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59856f68c6-mltb2"] Dec 16 12:17:48 crc kubenswrapper[4805]: I1216 12:17:48.734751 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59856f68c6-mltb2" podUID="e064fe82-3a6a-490b-a123-38a56ae4234c" containerName="neutron-api" containerID="cri-o://e7c793e2199d1363d14ed3c55e552bb64dad1f8a9cb55f80e525ddefd5d36218" gracePeriod=30 Dec 16 
12:17:48 crc kubenswrapper[4805]: I1216 12:17:48.734911 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59856f68c6-mltb2" podUID="e064fe82-3a6a-490b-a123-38a56ae4234c" containerName="neutron-httpd" containerID="cri-o://648d4285b5c3029f9528f51a18b7d3d602306c5207e09b95a1904e14ae85d422" gracePeriod=30 Dec 16 12:17:48 crc kubenswrapper[4805]: I1216 12:17:48.765587 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-59856f68c6-mltb2" podUID="e064fe82-3a6a-490b-a123-38a56ae4234c" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.162:9696/\": EOF" Dec 16 12:17:49 crc kubenswrapper[4805]: I1216 12:17:49.584008 4805 generic.go:334] "Generic (PLEG): container finished" podID="e064fe82-3a6a-490b-a123-38a56ae4234c" containerID="648d4285b5c3029f9528f51a18b7d3d602306c5207e09b95a1904e14ae85d422" exitCode=0 Dec 16 12:17:49 crc kubenswrapper[4805]: I1216 12:17:49.584260 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59856f68c6-mltb2" event={"ID":"e064fe82-3a6a-490b-a123-38a56ae4234c","Type":"ContainerDied","Data":"648d4285b5c3029f9528f51a18b7d3d602306c5207e09b95a1904e14ae85d422"} Dec 16 12:17:49 crc kubenswrapper[4805]: I1216 12:17:49.584588 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:17:49 crc kubenswrapper[4805]: I1216 12:17:49.584603 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:17:50 crc kubenswrapper[4805]: I1216 12:17:50.163577 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b002-account-create-v9rtw" Dec 16 12:17:50 crc kubenswrapper[4805]: I1216 12:17:50.261735 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx2nt\" (UniqueName: \"kubernetes.io/projected/cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd-kube-api-access-qx2nt\") pod \"cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd\" (UID: \"cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd\") " Dec 16 12:17:50 crc kubenswrapper[4805]: I1216 12:17:50.286392 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd-kube-api-access-qx2nt" (OuterVolumeSpecName: "kube-api-access-qx2nt") pod "cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd" (UID: "cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd"). InnerVolumeSpecName "kube-api-access-qx2nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:50 crc kubenswrapper[4805]: I1216 12:17:50.364552 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx2nt\" (UniqueName: \"kubernetes.io/projected/cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd-kube-api-access-qx2nt\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:50 crc kubenswrapper[4805]: I1216 12:17:50.609889 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b002-account-create-v9rtw" event={"ID":"cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd","Type":"ContainerDied","Data":"e68e8cce1b0ee6c3e642968d223f0153ed52134683abb14d15b8cad1949fbd4a"} Dec 16 12:17:50 crc kubenswrapper[4805]: I1216 12:17:50.609941 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e68e8cce1b0ee6c3e642968d223f0153ed52134683abb14d15b8cad1949fbd4a" Dec 16 12:17:50 crc kubenswrapper[4805]: I1216 12:17:50.610015 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b002-account-create-v9rtw" Dec 16 12:17:52 crc kubenswrapper[4805]: I1216 12:17:52.316539 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 16 12:17:52 crc kubenswrapper[4805]: I1216 12:17:52.316738 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.236894 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.329592 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-ovndb-tls-certs\") pod \"e064fe82-3a6a-490b-a123-38a56ae4234c\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.329729 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-config\") pod \"e064fe82-3a6a-490b-a123-38a56ae4234c\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.329757 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-combined-ca-bundle\") pod \"e064fe82-3a6a-490b-a123-38a56ae4234c\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.329788 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-httpd-config\") pod \"e064fe82-3a6a-490b-a123-38a56ae4234c\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.329924 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9mp2\" (UniqueName: \"kubernetes.io/projected/e064fe82-3a6a-490b-a123-38a56ae4234c-kube-api-access-d9mp2\") pod \"e064fe82-3a6a-490b-a123-38a56ae4234c\" (UID: \"e064fe82-3a6a-490b-a123-38a56ae4234c\") " Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.344404 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e064fe82-3a6a-490b-a123-38a56ae4234c-kube-api-access-d9mp2" (OuterVolumeSpecName: "kube-api-access-d9mp2") pod "e064fe82-3a6a-490b-a123-38a56ae4234c" (UID: "e064fe82-3a6a-490b-a123-38a56ae4234c"). InnerVolumeSpecName "kube-api-access-d9mp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.350064 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e064fe82-3a6a-490b-a123-38a56ae4234c" (UID: "e064fe82-3a6a-490b-a123-38a56ae4234c"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.448308 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-config" (OuterVolumeSpecName: "config") pod "e064fe82-3a6a-490b-a123-38a56ae4234c" (UID: "e064fe82-3a6a-490b-a123-38a56ae4234c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.450069 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.450096 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9mp2\" (UniqueName: \"kubernetes.io/projected/e064fe82-3a6a-490b-a123-38a56ae4234c-kube-api-access-d9mp2\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.450111 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.470556 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e064fe82-3a6a-490b-a123-38a56ae4234c" (UID: "e064fe82-3a6a-490b-a123-38a56ae4234c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.512278 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e064fe82-3a6a-490b-a123-38a56ae4234c" (UID: "e064fe82-3a6a-490b-a123-38a56ae4234c"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.551587 4805 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.551620 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e064fe82-3a6a-490b-a123-38a56ae4234c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.659410 4805 generic.go:334] "Generic (PLEG): container finished" podID="e064fe82-3a6a-490b-a123-38a56ae4234c" containerID="e7c793e2199d1363d14ed3c55e552bb64dad1f8a9cb55f80e525ddefd5d36218" exitCode=0 Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.659452 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59856f68c6-mltb2" event={"ID":"e064fe82-3a6a-490b-a123-38a56ae4234c","Type":"ContainerDied","Data":"e7c793e2199d1363d14ed3c55e552bb64dad1f8a9cb55f80e525ddefd5d36218"} Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.659478 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59856f68c6-mltb2" event={"ID":"e064fe82-3a6a-490b-a123-38a56ae4234c","Type":"ContainerDied","Data":"fd7248f5e2c4a484412d9b980d0bc21147a02610a77774edd49804ecf384bf82"} Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.659496 4805 scope.go:117] "RemoveContainer" containerID="648d4285b5c3029f9528f51a18b7d3d602306c5207e09b95a1904e14ae85d422" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.659835 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59856f68c6-mltb2" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.698413 4805 scope.go:117] "RemoveContainer" containerID="e7c793e2199d1363d14ed3c55e552bb64dad1f8a9cb55f80e525ddefd5d36218" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.704870 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.785205 4805 scope.go:117] "RemoveContainer" containerID="648d4285b5c3029f9528f51a18b7d3d602306c5207e09b95a1904e14ae85d422" Dec 16 12:17:53 crc kubenswrapper[4805]: E1216 12:17:53.786306 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"648d4285b5c3029f9528f51a18b7d3d602306c5207e09b95a1904e14ae85d422\": container with ID starting with 648d4285b5c3029f9528f51a18b7d3d602306c5207e09b95a1904e14ae85d422 not found: ID does not exist" containerID="648d4285b5c3029f9528f51a18b7d3d602306c5207e09b95a1904e14ae85d422" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.786341 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"648d4285b5c3029f9528f51a18b7d3d602306c5207e09b95a1904e14ae85d422"} err="failed to get container status \"648d4285b5c3029f9528f51a18b7d3d602306c5207e09b95a1904e14ae85d422\": rpc error: code = NotFound desc = could not find container \"648d4285b5c3029f9528f51a18b7d3d602306c5207e09b95a1904e14ae85d422\": container with ID starting with 648d4285b5c3029f9528f51a18b7d3d602306c5207e09b95a1904e14ae85d422 not found: ID does not exist" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.786370 4805 scope.go:117] "RemoveContainer" 
containerID="e7c793e2199d1363d14ed3c55e552bb64dad1f8a9cb55f80e525ddefd5d36218" Dec 16 12:17:53 crc kubenswrapper[4805]: E1216 12:17:53.790602 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7c793e2199d1363d14ed3c55e552bb64dad1f8a9cb55f80e525ddefd5d36218\": container with ID starting with e7c793e2199d1363d14ed3c55e552bb64dad1f8a9cb55f80e525ddefd5d36218 not found: ID does not exist" containerID="e7c793e2199d1363d14ed3c55e552bb64dad1f8a9cb55f80e525ddefd5d36218" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.790656 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7c793e2199d1363d14ed3c55e552bb64dad1f8a9cb55f80e525ddefd5d36218"} err="failed to get container status \"e7c793e2199d1363d14ed3c55e552bb64dad1f8a9cb55f80e525ddefd5d36218\": rpc error: code = NotFound desc = could not find container \"e7c793e2199d1363d14ed3c55e552bb64dad1f8a9cb55f80e525ddefd5d36218\": container with ID starting with e7c793e2199d1363d14ed3c55e552bb64dad1f8a9cb55f80e525ddefd5d36218 not found: ID does not exist" Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.873578 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59856f68c6-mltb2"] Dec 16 12:17:53 crc kubenswrapper[4805]: I1216 12:17:53.907898 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59856f68c6-mltb2"] Dec 16 12:17:54 crc kubenswrapper[4805]: I1216 12:17:54.533610 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e064fe82-3a6a-490b-a123-38a56ae4234c" path="/var/lib/kubelet/pods/e064fe82-3a6a-490b-a123-38a56ae4234c/volumes" Dec 16 12:17:56 crc kubenswrapper[4805]: I1216 12:17:56.171936 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:17:56 crc kubenswrapper[4805]: I1216 12:17:56.172535 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="proxy-httpd" containerID="cri-o://2b2eef2c110ce0997bd5c215cc1357b3feadad5e38774e8d422bf6093015ea54" gracePeriod=30 Dec 16 12:17:56 crc kubenswrapper[4805]: I1216 12:17:56.172619 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="sg-core" containerID="cri-o://fad9456ccf1b15a8d62e6161232edfff6d8b243c20d0fbb9d0140d616b516dfb" gracePeriod=30 Dec 16 12:17:56 crc kubenswrapper[4805]: I1216 12:17:56.172702 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="ceilometer-notification-agent" containerID="cri-o://b473ff820f3b14d80210abc2a0a98b73aa376efa98ecb06aec429ead1a6cdf9b" gracePeriod=30 Dec 16 12:17:56 crc kubenswrapper[4805]: I1216 12:17:56.172940 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="ceilometer-central-agent" containerID="cri-o://f1216b131f7c035f899f28871f636750d8927e6ec85e60838764c445d02e407d" gracePeriod=30 Dec 16 12:17:56 crc kubenswrapper[4805]: I1216 12:17:56.197377 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.172:3000/\": EOF" Dec 16 
12:17:56 crc kubenswrapper[4805]: I1216 12:17:56.707289 4805 generic.go:334] "Generic (PLEG): container finished" podID="8446402e-7a10-4670-8245-d8d25a3791e6" containerID="fad9456ccf1b15a8d62e6161232edfff6d8b243c20d0fbb9d0140d616b516dfb" exitCode=2 Dec 16 12:17:56 crc kubenswrapper[4805]: I1216 12:17:56.707598 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8446402e-7a10-4670-8245-d8d25a3791e6","Type":"ContainerDied","Data":"fad9456ccf1b15a8d62e6161232edfff6d8b243c20d0fbb9d0140d616b516dfb"} Dec 16 12:17:57 crc kubenswrapper[4805]: I1216 12:17:57.077524 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:17:57 crc kubenswrapper[4805]: I1216 12:17:57.077608 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:17:57 crc kubenswrapper[4805]: I1216 12:17:57.722454 4805 generic.go:334] "Generic (PLEG): container finished" podID="8446402e-7a10-4670-8245-d8d25a3791e6" containerID="2b2eef2c110ce0997bd5c215cc1357b3feadad5e38774e8d422bf6093015ea54" exitCode=0 Dec 16 12:17:57 crc kubenswrapper[4805]: I1216 12:17:57.722485 4805 generic.go:334] "Generic (PLEG): container finished" podID="8446402e-7a10-4670-8245-d8d25a3791e6" containerID="b473ff820f3b14d80210abc2a0a98b73aa376efa98ecb06aec429ead1a6cdf9b" exitCode=0 Dec 16 12:17:57 crc kubenswrapper[4805]: I1216 12:17:57.722493 4805 generic.go:334] "Generic (PLEG): container finished" podID="8446402e-7a10-4670-8245-d8d25a3791e6" containerID="f1216b131f7c035f899f28871f636750d8927e6ec85e60838764c445d02e407d" exitCode=0 Dec 16 12:17:57 crc kubenswrapper[4805]: I1216 12:17:57.722514 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8446402e-7a10-4670-8245-d8d25a3791e6","Type":"ContainerDied","Data":"2b2eef2c110ce0997bd5c215cc1357b3feadad5e38774e8d422bf6093015ea54"} Dec 16 12:17:57 crc kubenswrapper[4805]: I1216 12:17:57.722540 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8446402e-7a10-4670-8245-d8d25a3791e6","Type":"ContainerDied","Data":"b473ff820f3b14d80210abc2a0a98b73aa376efa98ecb06aec429ead1a6cdf9b"} Dec 16 12:17:57 crc kubenswrapper[4805]: I1216 12:17:57.722549 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8446402e-7a10-4670-8245-d8d25a3791e6","Type":"ContainerDied","Data":"f1216b131f7c035f899f28871f636750d8927e6ec85e60838764c445d02e407d"} Dec 16 12:18:02 crc kubenswrapper[4805]: I1216 12:18:02.908093 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.172:3000/\": dial tcp 10.217.0.172:3000: connect: connection refused" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.501177 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.653485 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq57x\" (UniqueName: \"kubernetes.io/projected/8446402e-7a10-4670-8245-d8d25a3791e6-kube-api-access-kq57x\") pod \"8446402e-7a10-4670-8245-d8d25a3791e6\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.653644 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-config-data\") pod \"8446402e-7a10-4670-8245-d8d25a3791e6\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.653712 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8446402e-7a10-4670-8245-d8d25a3791e6-log-httpd\") pod \"8446402e-7a10-4670-8245-d8d25a3791e6\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.653734 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-combined-ca-bundle\") pod \"8446402e-7a10-4670-8245-d8d25a3791e6\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.653764 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8446402e-7a10-4670-8245-d8d25a3791e6-run-httpd\") pod \"8446402e-7a10-4670-8245-d8d25a3791e6\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.653843 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-scripts\") pod \"8446402e-7a10-4670-8245-d8d25a3791e6\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.653883 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-sg-core-conf-yaml\") pod \"8446402e-7a10-4670-8245-d8d25a3791e6\" (UID: \"8446402e-7a10-4670-8245-d8d25a3791e6\") " Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.654469 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8446402e-7a10-4670-8245-d8d25a3791e6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8446402e-7a10-4670-8245-d8d25a3791e6" (UID: "8446402e-7a10-4670-8245-d8d25a3791e6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.654812 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8446402e-7a10-4670-8245-d8d25a3791e6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8446402e-7a10-4670-8245-d8d25a3791e6" (UID: "8446402e-7a10-4670-8245-d8d25a3791e6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.659294 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-scripts" (OuterVolumeSpecName: "scripts") pod "8446402e-7a10-4670-8245-d8d25a3791e6" (UID: "8446402e-7a10-4670-8245-d8d25a3791e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.659309 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8446402e-7a10-4670-8245-d8d25a3791e6-kube-api-access-kq57x" (OuterVolumeSpecName: "kube-api-access-kq57x") pod "8446402e-7a10-4670-8245-d8d25a3791e6" (UID: "8446402e-7a10-4670-8245-d8d25a3791e6"). InnerVolumeSpecName "kube-api-access-kq57x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.686337 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8446402e-7a10-4670-8245-d8d25a3791e6" (UID: "8446402e-7a10-4670-8245-d8d25a3791e6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.734522 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8446402e-7a10-4670-8245-d8d25a3791e6" (UID: "8446402e-7a10-4670-8245-d8d25a3791e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.755386 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.755853 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.755960 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq57x\" (UniqueName: \"kubernetes.io/projected/8446402e-7a10-4670-8245-d8d25a3791e6-kube-api-access-kq57x\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.756043 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8446402e-7a10-4670-8245-d8d25a3791e6-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.756110 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.756189 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8446402e-7a10-4670-8245-d8d25a3791e6-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.756157 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-config-data" (OuterVolumeSpecName: "config-data") pod "8446402e-7a10-4670-8245-d8d25a3791e6" (UID: "8446402e-7a10-4670-8245-d8d25a3791e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.790732 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jwhtf" event={"ID":"76f685ad-cc66-4b5a-bc99-f7af5604cbaa","Type":"ContainerStarted","Data":"552b27eb2df70cfeb8a37d30fa8ec0c68f64ff97687871e556c3135f2e670b68"} Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.799388 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8446402e-7a10-4670-8245-d8d25a3791e6","Type":"ContainerDied","Data":"6fd8faf7d263d300a0d783cdbbb8c5a802d27d5ee12187c8aff29d68fc75fa4f"} Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.799446 4805 scope.go:117] "RemoveContainer" containerID="2b2eef2c110ce0997bd5c215cc1357b3feadad5e38774e8d422bf6093015ea54" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.799629 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.812567 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-jwhtf" podStartSLOduration=1.871582495 podStartE2EDuration="17.812548802s" podCreationTimestamp="2025-12-16 12:17:46 +0000 UTC" firstStartedPulling="2025-12-16 12:17:47.289729888 +0000 UTC m=+1341.007987693" lastFinishedPulling="2025-12-16 12:18:03.230696195 +0000 UTC m=+1356.948954000" observedRunningTime="2025-12-16 12:18:03.81074304 +0000 UTC m=+1357.529000845" watchObservedRunningTime="2025-12-16 12:18:03.812548802 +0000 UTC m=+1357.530806627" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.827318 4805 scope.go:117] "RemoveContainer" containerID="fad9456ccf1b15a8d62e6161232edfff6d8b243c20d0fbb9d0140d616b516dfb" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.851852 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.858251 4805 scope.go:117] "RemoveContainer" containerID="b473ff820f3b14d80210abc2a0a98b73aa376efa98ecb06aec429ead1a6cdf9b" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.866289 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8446402e-7a10-4670-8245-d8d25a3791e6-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.879527 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.885223 4805 scope.go:117] "RemoveContainer" containerID="f1216b131f7c035f899f28871f636750d8927e6ec85e60838764c445d02e407d" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.898767 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:03 crc kubenswrapper[4805]: E1216 12:18:03.899239 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd" containerName="mariadb-account-create" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.899251 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd" containerName="mariadb-account-create" Dec 16 
12:18:03 crc kubenswrapper[4805]: E1216 12:18:03.899265 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e064fe82-3a6a-490b-a123-38a56ae4234c" containerName="neutron-httpd" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.899271 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e064fe82-3a6a-490b-a123-38a56ae4234c" containerName="neutron-httpd" Dec 16 12:18:03 crc kubenswrapper[4805]: E1216 12:18:03.899292 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="ceilometer-notification-agent" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.899298 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="ceilometer-notification-agent" Dec 16 12:18:03 crc kubenswrapper[4805]: E1216 12:18:03.899306 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="sg-core" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.899313 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="sg-core" Dec 16 12:18:03 crc kubenswrapper[4805]: E1216 12:18:03.899324 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="proxy-httpd" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.899330 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="proxy-httpd" Dec 16 12:18:03 crc kubenswrapper[4805]: E1216 12:18:03.899345 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="ceilometer-central-agent" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.899350 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="ceilometer-central-agent" Dec 16 12:18:03 crc kubenswrapper[4805]: E1216 12:18:03.899366 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e064fe82-3a6a-490b-a123-38a56ae4234c" containerName="neutron-api" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.899371 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e064fe82-3a6a-490b-a123-38a56ae4234c" containerName="neutron-api" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.899552 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="sg-core" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.899561 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="proxy-httpd" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.899571 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="ceilometer-notification-agent" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.899583 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e064fe82-3a6a-490b-a123-38a56ae4234c" containerName="neutron-httpd" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.899591 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e064fe82-3a6a-490b-a123-38a56ae4234c" containerName="neutron-api" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.899606 4805 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd" containerName="mariadb-account-create" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.899616 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" containerName="ceilometer-central-agent" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.901280 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.908969 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.911787 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.920138 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.968050 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.968537 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.968728 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-config-data\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.968844 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3e2534-dc82-41fc-8275-ce20011033e2-run-httpd\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.968966 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t55r7\" (UniqueName: \"kubernetes.io/projected/5b3e2534-dc82-41fc-8275-ce20011033e2-kube-api-access-t55r7\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.969114 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3e2534-dc82-41fc-8275-ce20011033e2-log-httpd\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:03 crc kubenswrapper[4805]: I1216 12:18:03.969609 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-scripts\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 
16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.071428 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3e2534-dc82-41fc-8275-ce20011033e2-log-httpd\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.071485 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-scripts\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.071588 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.071678 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.071766 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-config-data\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.071791 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3e2534-dc82-41fc-8275-ce20011033e2-run-httpd\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.071816 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t55r7\" (UniqueName: \"kubernetes.io/projected/5b3e2534-dc82-41fc-8275-ce20011033e2-kube-api-access-t55r7\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.071913 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3e2534-dc82-41fc-8275-ce20011033e2-log-httpd\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.072403 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3e2534-dc82-41fc-8275-ce20011033e2-run-httpd\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.077588 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 
12:18:04.085876 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.086071 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-scripts\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.088466 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t55r7\" (UniqueName: \"kubernetes.io/projected/5b3e2534-dc82-41fc-8275-ce20011033e2-kube-api-access-t55r7\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.088808 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-config-data\") pod \"ceilometer-0\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " pod="openstack/ceilometer-0" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.236897 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.534519 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8446402e-7a10-4670-8245-d8d25a3791e6" path="/var/lib/kubelet/pods/8446402e-7a10-4670-8245-d8d25a3791e6/volumes" Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.703024 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:04 crc kubenswrapper[4805]: I1216 12:18:04.811283 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3e2534-dc82-41fc-8275-ce20011033e2","Type":"ContainerStarted","Data":"14bedd925a0d56dcfe029493cdda596e0543395396730829c9537bcbf2c15624"} Dec 16 12:18:05 crc kubenswrapper[4805]: I1216 12:18:05.823368 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3e2534-dc82-41fc-8275-ce20011033e2","Type":"ContainerStarted","Data":"6bde11d7e301a9031d9f6e8f4079ebaffc679e8dc0d5e76b5b0bc009f228fc0d"} Dec 16 12:18:07 crc kubenswrapper[4805]: I1216 12:18:07.854492 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3e2534-dc82-41fc-8275-ce20011033e2","Type":"ContainerStarted","Data":"edd9c149d6a73242e44e5ba6001fa7350adeb59171f1dcb0e61fa736931b1391"} Dec 16 12:18:08 crc kubenswrapper[4805]: I1216 12:18:08.868738 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3e2534-dc82-41fc-8275-ce20011033e2","Type":"ContainerStarted","Data":"6ca19dffc4a8d7cd61fa54e9525180f4d6ac783867c57078fc49c09720f83bf1"} Dec 16 12:18:09 crc kubenswrapper[4805]: I1216 12:18:09.881094 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3e2534-dc82-41fc-8275-ce20011033e2","Type":"ContainerStarted","Data":"8922417039b305b2787f7e1afaea1cbcecc316baa808de8d7e137fbd6d05af70"} Dec 16 12:18:09 crc kubenswrapper[4805]: I1216 12:18:09.881454 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Dec 16 12:18:09 crc kubenswrapper[4805]: I1216 12:18:09.906207 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.394447091 podStartE2EDuration="6.906179963s" podCreationTimestamp="2025-12-16 12:18:03 +0000 UTC" firstStartedPulling="2025-12-16 12:18:04.708715134 +0000 UTC m=+1358.426972939" lastFinishedPulling="2025-12-16 12:18:09.220448006 +0000 UTC m=+1362.938705811" observedRunningTime="2025-12-16 12:18:09.901957022 +0000 UTC m=+1363.620214837" watchObservedRunningTime="2025-12-16 12:18:09.906179963 +0000 UTC m=+1363.624437768" Dec 16 12:18:11 crc kubenswrapper[4805]: I1216 12:18:11.642366 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:11 crc kubenswrapper[4805]: I1216 12:18:11.906390 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="ceilometer-central-agent" containerID="cri-o://6bde11d7e301a9031d9f6e8f4079ebaffc679e8dc0d5e76b5b0bc009f228fc0d" gracePeriod=30 Dec 16 12:18:11 crc kubenswrapper[4805]: I1216 12:18:11.906445 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="sg-core" containerID="cri-o://6ca19dffc4a8d7cd61fa54e9525180f4d6ac783867c57078fc49c09720f83bf1" gracePeriod=30 Dec 16 12:18:11 crc kubenswrapper[4805]: I1216 12:18:11.906467 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="proxy-httpd" containerID="cri-o://8922417039b305b2787f7e1afaea1cbcecc316baa808de8d7e137fbd6d05af70" gracePeriod=30 Dec 16 12:18:11 crc kubenswrapper[4805]: I1216 12:18:11.906483 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="ceilometer-notification-agent" containerID="cri-o://edd9c149d6a73242e44e5ba6001fa7350adeb59171f1dcb0e61fa736931b1391" gracePeriod=30 Dec 16 12:18:12 crc kubenswrapper[4805]: I1216 12:18:12.917395 4805 generic.go:334] "Generic (PLEG): container finished" podID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerID="8922417039b305b2787f7e1afaea1cbcecc316baa808de8d7e137fbd6d05af70" exitCode=0 Dec 16 12:18:12 crc kubenswrapper[4805]: I1216 12:18:12.917692 4805 generic.go:334] "Generic (PLEG): container finished" podID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerID="6ca19dffc4a8d7cd61fa54e9525180f4d6ac783867c57078fc49c09720f83bf1" exitCode=2 Dec 16 12:18:12 crc kubenswrapper[4805]: I1216 12:18:12.917703 4805 generic.go:334] "Generic (PLEG): container finished" podID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerID="edd9c149d6a73242e44e5ba6001fa7350adeb59171f1dcb0e61fa736931b1391" exitCode=0 Dec 16 12:18:12 crc kubenswrapper[4805]: I1216 12:18:12.917476 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3e2534-dc82-41fc-8275-ce20011033e2","Type":"ContainerDied","Data":"8922417039b305b2787f7e1afaea1cbcecc316baa808de8d7e137fbd6d05af70"} Dec 16 12:18:12 crc kubenswrapper[4805]: I1216 12:18:12.917756 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3e2534-dc82-41fc-8275-ce20011033e2","Type":"ContainerDied","Data":"6ca19dffc4a8d7cd61fa54e9525180f4d6ac783867c57078fc49c09720f83bf1"} Dec 16 
12:18:12 crc kubenswrapper[4805]: I1216 12:18:12.917772 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3e2534-dc82-41fc-8275-ce20011033e2","Type":"ContainerDied","Data":"edd9c149d6a73242e44e5ba6001fa7350adeb59171f1dcb0e61fa736931b1391"} Dec 16 12:18:15 crc kubenswrapper[4805]: I1216 12:18:15.950178 4805 generic.go:334] "Generic (PLEG): container finished" podID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerID="6bde11d7e301a9031d9f6e8f4079ebaffc679e8dc0d5e76b5b0bc009f228fc0d" exitCode=0 Dec 16 12:18:15 crc kubenswrapper[4805]: I1216 12:18:15.950280 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3e2534-dc82-41fc-8275-ce20011033e2","Type":"ContainerDied","Data":"6bde11d7e301a9031d9f6e8f4079ebaffc679e8dc0d5e76b5b0bc009f228fc0d"} Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.508699 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.624080 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-scripts\") pod \"5b3e2534-dc82-41fc-8275-ce20011033e2\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.624139 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-sg-core-conf-yaml\") pod \"5b3e2534-dc82-41fc-8275-ce20011033e2\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.624237 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3e2534-dc82-41fc-8275-ce20011033e2-run-httpd\") pod \"5b3e2534-dc82-41fc-8275-ce20011033e2\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.624328 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t55r7\" (UniqueName: \"kubernetes.io/projected/5b3e2534-dc82-41fc-8275-ce20011033e2-kube-api-access-t55r7\") pod \"5b3e2534-dc82-41fc-8275-ce20011033e2\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.624409 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3e2534-dc82-41fc-8275-ce20011033e2-log-httpd\") pod \"5b3e2534-dc82-41fc-8275-ce20011033e2\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.624458 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-combined-ca-bundle\") pod \"5b3e2534-dc82-41fc-8275-ce20011033e2\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.624510 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-config-data\") pod \"5b3e2534-dc82-41fc-8275-ce20011033e2\" (UID: \"5b3e2534-dc82-41fc-8275-ce20011033e2\") " Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.624773 4805 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b3e2534-dc82-41fc-8275-ce20011033e2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5b3e2534-dc82-41fc-8275-ce20011033e2" (UID: "5b3e2534-dc82-41fc-8275-ce20011033e2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.624900 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b3e2534-dc82-41fc-8275-ce20011033e2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5b3e2534-dc82-41fc-8275-ce20011033e2" (UID: "5b3e2534-dc82-41fc-8275-ce20011033e2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.625410 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3e2534-dc82-41fc-8275-ce20011033e2-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.625431 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3e2534-dc82-41fc-8275-ce20011033e2-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.632298 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-scripts" (OuterVolumeSpecName: "scripts") pod "5b3e2534-dc82-41fc-8275-ce20011033e2" (UID: "5b3e2534-dc82-41fc-8275-ce20011033e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.639414 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b3e2534-dc82-41fc-8275-ce20011033e2-kube-api-access-t55r7" (OuterVolumeSpecName: "kube-api-access-t55r7") pod "5b3e2534-dc82-41fc-8275-ce20011033e2" (UID: "5b3e2534-dc82-41fc-8275-ce20011033e2"). InnerVolumeSpecName "kube-api-access-t55r7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.672017 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5b3e2534-dc82-41fc-8275-ce20011033e2" (UID: "5b3e2534-dc82-41fc-8275-ce20011033e2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.716582 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b3e2534-dc82-41fc-8275-ce20011033e2" (UID: "5b3e2534-dc82-41fc-8275-ce20011033e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.727563 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t55r7\" (UniqueName: \"kubernetes.io/projected/5b3e2534-dc82-41fc-8275-ce20011033e2-kube-api-access-t55r7\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.727615 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.727628 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.727639 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.729961 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-config-data" (OuterVolumeSpecName: "config-data") pod "5b3e2534-dc82-41fc-8275-ce20011033e2" (UID: "5b3e2534-dc82-41fc-8275-ce20011033e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.829686 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3e2534-dc82-41fc-8275-ce20011033e2-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.961375 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3e2534-dc82-41fc-8275-ce20011033e2","Type":"ContainerDied","Data":"14bedd925a0d56dcfe029493cdda596e0543395396730829c9537bcbf2c15624"} Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.961660 4805 scope.go:117] "RemoveContainer" containerID="8922417039b305b2787f7e1afaea1cbcecc316baa808de8d7e137fbd6d05af70" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.961778 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:18:16 crc kubenswrapper[4805]: I1216 12:18:16.990375 4805 scope.go:117] "RemoveContainer" containerID="6ca19dffc4a8d7cd61fa54e9525180f4d6ac783867c57078fc49c09720f83bf1" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.002149 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.013996 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.014891 4805 scope.go:117] "RemoveContainer" containerID="edd9c149d6a73242e44e5ba6001fa7350adeb59171f1dcb0e61fa736931b1391" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.040856 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:17 crc kubenswrapper[4805]: E1216 12:18:17.042520 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="sg-core" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.042684 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="sg-core" Dec 16 12:18:17 crc kubenswrapper[4805]: E1216 12:18:17.042852 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="ceilometer-notification-agent" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.042972 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="ceilometer-notification-agent" Dec 16 12:18:17 crc kubenswrapper[4805]: E1216 12:18:17.043067 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="ceilometer-central-agent" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.043529 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="ceilometer-central-agent" Dec 16 12:18:17 crc kubenswrapper[4805]: E1216 12:18:17.043665 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="proxy-httpd" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.043749 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="proxy-httpd" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.044224 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="sg-core" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.044345 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="ceilometer-notification-agent" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.044482 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="proxy-httpd" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.044706 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" containerName="ceilometer-central-agent" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.048199 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.048455 4805 scope.go:117] "RemoveContainer" containerID="6bde11d7e301a9031d9f6e8f4079ebaffc679e8dc0d5e76b5b0bc009f228fc0d" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.052928 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.055103 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.062789 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.135243 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.135285 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.135343 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3353e601-87dd-4843-b644-dcc50890f303-log-httpd\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.135390 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-config-data\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.135513 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h75n5\" (UniqueName: \"kubernetes.io/projected/3353e601-87dd-4843-b644-dcc50890f303-kube-api-access-h75n5\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.135581 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3353e601-87dd-4843-b644-dcc50890f303-run-httpd\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.135712 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-scripts\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.237511 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.237785 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.237966 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3353e601-87dd-4843-b644-dcc50890f303-log-httpd\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.238098 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-config-data\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.238248 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h75n5\" (UniqueName: \"kubernetes.io/projected/3353e601-87dd-4843-b644-dcc50890f303-kube-api-access-h75n5\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.238368 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3353e601-87dd-4843-b644-dcc50890f303-run-httpd\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.238505 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-scripts\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.238650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3353e601-87dd-4843-b644-dcc50890f303-run-httpd\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.238402 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3353e601-87dd-4843-b644-dcc50890f303-log-httpd\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.243138 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-config-data\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.243293 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.244766 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.245510 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-scripts\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.268121 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h75n5\" (UniqueName: \"kubernetes.io/projected/3353e601-87dd-4843-b644-dcc50890f303-kube-api-access-h75n5\") pod \"ceilometer-0\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.369987 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.825564 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:17 crc kubenswrapper[4805]: I1216 12:18:17.971956 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3353e601-87dd-4843-b644-dcc50890f303","Type":"ContainerStarted","Data":"add946fbc7addfe32106bc56801aa5dfdf34c9fd110aba65f8d82496432659a2"} Dec 16 12:18:18 crc kubenswrapper[4805]: I1216 12:18:18.546423 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b3e2534-dc82-41fc-8275-ce20011033e2" path="/var/lib/kubelet/pods/5b3e2534-dc82-41fc-8275-ce20011033e2/volumes" Dec 16 12:18:18 crc kubenswrapper[4805]: I1216 12:18:18.981740 4805 generic.go:334] "Generic (PLEG): container finished" podID="76f685ad-cc66-4b5a-bc99-f7af5604cbaa" containerID="552b27eb2df70cfeb8a37d30fa8ec0c68f64ff97687871e556c3135f2e670b68" exitCode=0 Dec 16 12:18:18 crc kubenswrapper[4805]: I1216 12:18:18.981826 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jwhtf" event={"ID":"76f685ad-cc66-4b5a-bc99-f7af5604cbaa","Type":"ContainerDied","Data":"552b27eb2df70cfeb8a37d30fa8ec0c68f64ff97687871e556c3135f2e670b68"} Dec 16 12:18:18 crc kubenswrapper[4805]: I1216 12:18:18.984351 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3353e601-87dd-4843-b644-dcc50890f303","Type":"ContainerStarted","Data":"46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407"} Dec 16 12:18:20 crc kubenswrapper[4805]: I1216 12:18:20.360647 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:18:20 crc kubenswrapper[4805]: I1216 12:18:20.521942 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7t5h\" (UniqueName: \"kubernetes.io/projected/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-kube-api-access-k7t5h\") pod \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " Dec 16 12:18:20 crc kubenswrapper[4805]: I1216 12:18:20.522062 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-config-data\") pod \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " Dec 16 12:18:20 crc kubenswrapper[4805]: I1216 12:18:20.522233 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-combined-ca-bundle\") pod \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " Dec 16 12:18:20 crc kubenswrapper[4805]: I1216 12:18:20.522672 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-scripts\") pod \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\" (UID: \"76f685ad-cc66-4b5a-bc99-f7af5604cbaa\") " Dec 16 12:18:20 crc kubenswrapper[4805]: I1216 12:18:20.526853 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-kube-api-access-k7t5h" (OuterVolumeSpecName: "kube-api-access-k7t5h") pod "76f685ad-cc66-4b5a-bc99-f7af5604cbaa" (UID: "76f685ad-cc66-4b5a-bc99-f7af5604cbaa"). InnerVolumeSpecName "kube-api-access-k7t5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:18:20 crc kubenswrapper[4805]: I1216 12:18:20.528394 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-scripts" (OuterVolumeSpecName: "scripts") pod "76f685ad-cc66-4b5a-bc99-f7af5604cbaa" (UID: "76f685ad-cc66-4b5a-bc99-f7af5604cbaa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:20 crc kubenswrapper[4805]: I1216 12:18:20.550207 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76f685ad-cc66-4b5a-bc99-f7af5604cbaa" (UID: "76f685ad-cc66-4b5a-bc99-f7af5604cbaa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:20 crc kubenswrapper[4805]: I1216 12:18:20.558337 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-config-data" (OuterVolumeSpecName: "config-data") pod "76f685ad-cc66-4b5a-bc99-f7af5604cbaa" (UID: "76f685ad-cc66-4b5a-bc99-f7af5604cbaa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:20 crc kubenswrapper[4805]: I1216 12:18:20.624566 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:20 crc kubenswrapper[4805]: I1216 12:18:20.624592 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:20 crc kubenswrapper[4805]: I1216 12:18:20.624601 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7t5h\" (UniqueName: \"kubernetes.io/projected/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-kube-api-access-k7t5h\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:20 crc kubenswrapper[4805]: I1216 12:18:20.624612 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76f685ad-cc66-4b5a-bc99-f7af5604cbaa-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.003991 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3353e601-87dd-4843-b644-dcc50890f303","Type":"ContainerStarted","Data":"1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357"} Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.005876 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jwhtf" event={"ID":"76f685ad-cc66-4b5a-bc99-f7af5604cbaa","Type":"ContainerDied","Data":"c490f2c97c5a6e40e9eee19dccb20ba6f5677c378c4b93f3fcf772f34dcc1b0e"} Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.005915 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c490f2c97c5a6e40e9eee19dccb20ba6f5677c378c4b93f3fcf772f34dcc1b0e" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.006166 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jwhtf" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.127414 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 16 12:18:21 crc kubenswrapper[4805]: E1216 12:18:21.128544 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76f685ad-cc66-4b5a-bc99-f7af5604cbaa" containerName="nova-cell0-conductor-db-sync" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.128659 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f685ad-cc66-4b5a-bc99-f7af5604cbaa" containerName="nova-cell0-conductor-db-sync" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.128973 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="76f685ad-cc66-4b5a-bc99-f7af5604cbaa" containerName="nova-cell0-conductor-db-sync" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.129930 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.143307 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xdpdc" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.143861 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.149538 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.234949 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4qbs\" (UniqueName: \"kubernetes.io/projected/0f19678d-cbc8-49f7-ad36-5b4886220730-kube-api-access-j4qbs\") pod \"nova-cell0-conductor-0\" (UID: \"0f19678d-cbc8-49f7-ad36-5b4886220730\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.235101 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f19678d-cbc8-49f7-ad36-5b4886220730-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0f19678d-cbc8-49f7-ad36-5b4886220730\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.235172 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f19678d-cbc8-49f7-ad36-5b4886220730-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0f19678d-cbc8-49f7-ad36-5b4886220730\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.337531 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4qbs\" (UniqueName: \"kubernetes.io/projected/0f19678d-cbc8-49f7-ad36-5b4886220730-kube-api-access-j4qbs\") pod \"nova-cell0-conductor-0\" (UID: \"0f19678d-cbc8-49f7-ad36-5b4886220730\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.337657 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f19678d-cbc8-49f7-ad36-5b4886220730-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0f19678d-cbc8-49f7-ad36-5b4886220730\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.337702 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f19678d-cbc8-49f7-ad36-5b4886220730-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0f19678d-cbc8-49f7-ad36-5b4886220730\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.343941 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f19678d-cbc8-49f7-ad36-5b4886220730-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0f19678d-cbc8-49f7-ad36-5b4886220730\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.344560 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f19678d-cbc8-49f7-ad36-5b4886220730-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"0f19678d-cbc8-49f7-ad36-5b4886220730\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.376168 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4qbs\" (UniqueName: \"kubernetes.io/projected/0f19678d-cbc8-49f7-ad36-5b4886220730-kube-api-access-j4qbs\") pod \"nova-cell0-conductor-0\" (UID: \"0f19678d-cbc8-49f7-ad36-5b4886220730\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:21 crc kubenswrapper[4805]: I1216 12:18:21.475496 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:22 crc kubenswrapper[4805]: I1216 12:18:22.244037 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 16 12:18:22 crc kubenswrapper[4805]: I1216 12:18:22.290514 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 16 12:18:23 crc kubenswrapper[4805]: I1216 12:18:23.025397 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0f19678d-cbc8-49f7-ad36-5b4886220730","Type":"ContainerStarted","Data":"7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e"} Dec 16 12:18:23 crc kubenswrapper[4805]: I1216 12:18:23.025659 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0f19678d-cbc8-49f7-ad36-5b4886220730","Type":"ContainerStarted","Data":"c4a8a2c8426f4af981f62fdba66f1d4c9d2f2e8995d0b65efaeaa7c7618c8b9c"} Dec 16 12:18:23 crc kubenswrapper[4805]: I1216 12:18:23.025794 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="0f19678d-cbc8-49f7-ad36-5b4886220730" containerName="nova-cell0-conductor-conductor" containerID="cri-o://7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" gracePeriod=30 Dec 16 12:18:23 crc kubenswrapper[4805]: I1216 12:18:23.026091 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:23 crc kubenswrapper[4805]: I1216 12:18:23.031748 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3353e601-87dd-4843-b644-dcc50890f303","Type":"ContainerStarted","Data":"9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6"} Dec 16 12:18:25 crc kubenswrapper[4805]: I1216 12:18:25.029008 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=4.028986997 podStartE2EDuration="4.028986997s" podCreationTimestamp="2025-12-16 12:18:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:18:23.047940013 +0000 UTC m=+1376.766197818" watchObservedRunningTime="2025-12-16 12:18:25.028986997 +0000 UTC m=+1378.747244812" Dec 16 12:18:25 crc kubenswrapper[4805]: I1216 12:18:25.031090 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:25 crc kubenswrapper[4805]: I1216 12:18:25.055202 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3353e601-87dd-4843-b644-dcc50890f303","Type":"ContainerStarted","Data":"f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44"} Dec 16 12:18:25 crc kubenswrapper[4805]: I1216 12:18:25.055707 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ceilometer-0" Dec 16 12:18:25 crc kubenswrapper[4805]: I1216 12:18:25.080765 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.970376135 podStartE2EDuration="8.080742601s" podCreationTimestamp="2025-12-16 12:18:17 +0000 UTC" firstStartedPulling="2025-12-16 12:18:17.829033654 +0000 UTC m=+1371.547291469" lastFinishedPulling="2025-12-16 12:18:23.93940013 +0000 UTC m=+1377.657657935" observedRunningTime="2025-12-16 12:18:25.073774321 +0000 UTC m=+1378.792032126" watchObservedRunningTime="2025-12-16 12:18:25.080742601 +0000 UTC m=+1378.799000416" Dec 16 12:18:26 crc kubenswrapper[4805]: I1216 12:18:26.063723 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="ceilometer-central-agent" containerID="cri-o://46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407" gracePeriod=30 Dec 16 12:18:26 crc kubenswrapper[4805]: I1216 12:18:26.063865 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="proxy-httpd" containerID="cri-o://f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44" gracePeriod=30 Dec 16 12:18:26 crc kubenswrapper[4805]: I1216 12:18:26.063909 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="sg-core" containerID="cri-o://9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6" gracePeriod=30 Dec 16 12:18:26 crc kubenswrapper[4805]: I1216 12:18:26.063942 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="ceilometer-notification-agent" containerID="cri-o://1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357" gracePeriod=30 Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.017680 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.072721 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.073544 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.092943 4805 generic.go:334] "Generic (PLEG): container finished" podID="3353e601-87dd-4843-b644-dcc50890f303" containerID="f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44" exitCode=0 Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.093280 4805 generic.go:334] "Generic (PLEG): container finished" podID="3353e601-87dd-4843-b644-dcc50890f303" containerID="9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6" exitCode=2 Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.093389 4805 generic.go:334] "Generic (PLEG): container finished" podID="3353e601-87dd-4843-b644-dcc50890f303" containerID="1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357" exitCode=0 Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.093485 4805 generic.go:334] "Generic (PLEG): container finished" podID="3353e601-87dd-4843-b644-dcc50890f303" containerID="46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407" exitCode=0 Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.093577 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3353e601-87dd-4843-b644-dcc50890f303","Type":"ContainerDied","Data":"f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44"} Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.093696 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3353e601-87dd-4843-b644-dcc50890f303","Type":"ContainerDied","Data":"9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6"} Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.093789 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3353e601-87dd-4843-b644-dcc50890f303","Type":"ContainerDied","Data":"1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357"} Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.093929 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3353e601-87dd-4843-b644-dcc50890f303","Type":"ContainerDied","Data":"46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407"} Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.094029 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3353e601-87dd-4843-b644-dcc50890f303","Type":"ContainerDied","Data":"add946fbc7addfe32106bc56801aa5dfdf34c9fd110aba65f8d82496432659a2"} Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.094121 4805 scope.go:117] "RemoveContainer" containerID="f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.094293 4805 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.118707 4805 scope.go:117] "RemoveContainer" containerID="9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.141892 4805 scope.go:117] "RemoveContainer" containerID="1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.166506 4805 scope.go:117] "RemoveContainer" containerID="46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.186128 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h75n5\" (UniqueName: \"kubernetes.io/projected/3353e601-87dd-4843-b644-dcc50890f303-kube-api-access-h75n5\") pod \"3353e601-87dd-4843-b644-dcc50890f303\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.186201 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-combined-ca-bundle\") pod \"3353e601-87dd-4843-b644-dcc50890f303\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.186249 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-config-data\") pod \"3353e601-87dd-4843-b644-dcc50890f303\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.186289 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-sg-core-conf-yaml\") pod \"3353e601-87dd-4843-b644-dcc50890f303\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.186341 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3353e601-87dd-4843-b644-dcc50890f303-log-httpd\") pod \"3353e601-87dd-4843-b644-dcc50890f303\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.186366 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3353e601-87dd-4843-b644-dcc50890f303-run-httpd\") pod \"3353e601-87dd-4843-b644-dcc50890f303\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.186390 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-scripts\") pod \"3353e601-87dd-4843-b644-dcc50890f303\" (UID: \"3353e601-87dd-4843-b644-dcc50890f303\") " Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.189511 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3353e601-87dd-4843-b644-dcc50890f303-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3353e601-87dd-4843-b644-dcc50890f303" (UID: "3353e601-87dd-4843-b644-dcc50890f303"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.190810 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3353e601-87dd-4843-b644-dcc50890f303-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3353e601-87dd-4843-b644-dcc50890f303" (UID: "3353e601-87dd-4843-b644-dcc50890f303"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.193726 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3353e601-87dd-4843-b644-dcc50890f303-kube-api-access-h75n5" (OuterVolumeSpecName: "kube-api-access-h75n5") pod "3353e601-87dd-4843-b644-dcc50890f303" (UID: "3353e601-87dd-4843-b644-dcc50890f303"). InnerVolumeSpecName "kube-api-access-h75n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.196188 4805 scope.go:117] "RemoveContainer" containerID="f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.199268 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-scripts" (OuterVolumeSpecName: "scripts") pod "3353e601-87dd-4843-b644-dcc50890f303" (UID: "3353e601-87dd-4843-b644-dcc50890f303"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:27 crc kubenswrapper[4805]: E1216 12:18:27.199468 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44\": container with ID starting with f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44 not found: ID does not exist" containerID="f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.199556 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44"} err="failed to get container status \"f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44\": rpc error: code = NotFound desc = could not find container \"f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44\": container with ID starting with f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44 not found: ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.199645 4805 scope.go:117] "RemoveContainer" containerID="9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6" Dec 16 12:18:27 crc kubenswrapper[4805]: E1216 12:18:27.200268 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6\": container with ID starting with 9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6 not found: ID does not exist" containerID="9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.200374 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6"} err="failed to get container status 
\"9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6\": rpc error: code = NotFound desc = could not find container \"9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6\": container with ID starting with 9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6 not found: ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.200453 4805 scope.go:117] "RemoveContainer" containerID="1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357" Dec 16 12:18:27 crc kubenswrapper[4805]: E1216 12:18:27.205299 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357\": container with ID starting with 1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357 not found: ID does not exist" containerID="1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.205473 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357"} err="failed to get container status \"1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357\": rpc error: code = NotFound desc = could not find container \"1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357\": container with ID starting with 1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357 not found: ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.205551 4805 scope.go:117] "RemoveContainer" containerID="46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407" Dec 16 12:18:27 crc kubenswrapper[4805]: E1216 12:18:27.208267 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407\": container with ID starting with 46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407 not found: ID does not exist" containerID="46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.208416 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407"} err="failed to get container status \"46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407\": rpc error: code = NotFound desc = could not find container \"46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407\": container with ID starting with 46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407 not found: ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.208504 4805 scope.go:117] "RemoveContainer" containerID="f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.208899 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44"} err="failed to get container status \"f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44\": rpc error: code = NotFound desc = could not find container \"f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44\": container with ID starting with f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44 not found: 
ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.208951 4805 scope.go:117] "RemoveContainer" containerID="9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.209261 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6"} err="failed to get container status \"9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6\": rpc error: code = NotFound desc = could not find container \"9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6\": container with ID starting with 9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6 not found: ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.209285 4805 scope.go:117] "RemoveContainer" containerID="1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.209489 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357"} err="failed to get container status \"1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357\": rpc error: code = NotFound desc = could not find container \"1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357\": container with ID starting with 1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357 not found: ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.209508 4805 scope.go:117] "RemoveContainer" containerID="46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.209738 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407"} err="failed to get container status \"46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407\": rpc error: code = NotFound desc = could not find container \"46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407\": container with ID starting with 46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407 not found: ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.209770 4805 scope.go:117] "RemoveContainer" containerID="f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.209976 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44"} err="failed to get container status \"f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44\": rpc error: code = NotFound desc = could not find container \"f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44\": container with ID starting with f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44 not found: ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.209996 4805 scope.go:117] "RemoveContainer" containerID="9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.210191 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6"} err="failed to get container status 
\"9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6\": rpc error: code = NotFound desc = could not find container \"9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6\": container with ID starting with 9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6 not found: ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.210209 4805 scope.go:117] "RemoveContainer" containerID="1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.210457 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357"} err="failed to get container status \"1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357\": rpc error: code = NotFound desc = could not find container \"1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357\": container with ID starting with 1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357 not found: ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.210486 4805 scope.go:117] "RemoveContainer" containerID="46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.210666 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407"} err="failed to get container status \"46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407\": rpc error: code = NotFound desc = could not find container \"46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407\": container with ID starting with 46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407 not found: ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.210685 4805 scope.go:117] "RemoveContainer" containerID="f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.210885 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44"} err="failed to get container status \"f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44\": rpc error: code = NotFound desc = could not find container \"f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44\": container with ID starting with f180b54be0154efcd7ca55a6ab4b75bf0005bc0f70bc416386fb873f012b7b44 not found: ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.210921 4805 scope.go:117] "RemoveContainer" containerID="9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.211121 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6"} err="failed to get container status \"9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6\": rpc error: code = NotFound desc = could not find container \"9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6\": container with ID starting with 9320f3150de8efbd4cf2d4ed5a84cd529a29663203a44bb8f20444aed9196fe6 not found: ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.211212 4805 scope.go:117] "RemoveContainer" 
containerID="1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.211415 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357"} err="failed to get container status \"1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357\": rpc error: code = NotFound desc = could not find container \"1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357\": container with ID starting with 1b32639eccb1e98433571236bf663e3921af5a00df7c2fa9e2f60102ff620357 not found: ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.211439 4805 scope.go:117] "RemoveContainer" containerID="46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.211654 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407"} err="failed to get container status \"46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407\": rpc error: code = NotFound desc = could not find container \"46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407\": container with ID starting with 46a638efa48ebf429005db398406af491b8d517c26c01ed280f99da97ceb3407 not found: ID does not exist" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.225804 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3353e601-87dd-4843-b644-dcc50890f303" (UID: "3353e601-87dd-4843-b644-dcc50890f303"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.288795 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h75n5\" (UniqueName: \"kubernetes.io/projected/3353e601-87dd-4843-b644-dcc50890f303-kube-api-access-h75n5\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.288832 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.288845 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3353e601-87dd-4843-b644-dcc50890f303-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.288854 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3353e601-87dd-4843-b644-dcc50890f303-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.288863 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.342756 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3353e601-87dd-4843-b644-dcc50890f303" (UID: "3353e601-87dd-4843-b644-dcc50890f303"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.362949 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dc66n"] Dec 16 12:18:27 crc kubenswrapper[4805]: E1216 12:18:27.363647 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="ceilometer-central-agent" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.363751 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="ceilometer-central-agent" Dec 16 12:18:27 crc kubenswrapper[4805]: E1216 12:18:27.363842 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="proxy-httpd" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.363911 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="proxy-httpd" Dec 16 12:18:27 crc kubenswrapper[4805]: E1216 12:18:27.364029 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="ceilometer-notification-agent" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.364105 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="ceilometer-notification-agent" Dec 16 12:18:27 crc kubenswrapper[4805]: E1216 12:18:27.364211 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="sg-core" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.364391 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="sg-core" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.364673 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="ceilometer-notification-agent" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.364786 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="sg-core" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.364880 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="ceilometer-central-agent" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.364962 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3353e601-87dd-4843-b644-dcc50890f303" containerName="proxy-httpd" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.366719 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.396531 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-utilities\") pod \"redhat-operators-dc66n\" (UID: \"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa\") " pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.398753 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dc66n"] Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.403779 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-catalog-content\") pod \"redhat-operators-dc66n\" (UID: \"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa\") " pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.403971 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc276\" (UniqueName: \"kubernetes.io/projected/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-kube-api-access-sc276\") pod \"redhat-operators-dc66n\" (UID: \"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa\") " pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.404903 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.420285 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-config-data" (OuterVolumeSpecName: "config-data") pod "3353e601-87dd-4843-b644-dcc50890f303" (UID: "3353e601-87dd-4843-b644-dcc50890f303"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.505964 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-catalog-content\") pod \"redhat-operators-dc66n\" (UID: \"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa\") " pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.506330 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc276\" (UniqueName: \"kubernetes.io/projected/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-kube-api-access-sc276\") pod \"redhat-operators-dc66n\" (UID: \"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa\") " pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.506469 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-utilities\") pod \"redhat-operators-dc66n\" (UID: \"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa\") " pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.506588 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3353e601-87dd-4843-b644-dcc50890f303-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.507076 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-catalog-content\") pod \"redhat-operators-dc66n\" (UID: \"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa\") " pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.508323 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-utilities\") pod \"redhat-operators-dc66n\" (UID: \"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa\") " pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.524540 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc276\" (UniqueName: \"kubernetes.io/projected/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-kube-api-access-sc276\") pod \"redhat-operators-dc66n\" (UID: \"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa\") " pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.735502 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.735925 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.748775 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.776691 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.782766 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.786796 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.788808 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.789675 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.810452 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrx6b\" (UniqueName: \"kubernetes.io/projected/defc1f03-2277-4472-9ca8-2cd1c904cbd2-kube-api-access-lrx6b\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.810771 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.810887 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-scripts\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.811006 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/defc1f03-2277-4472-9ca8-2cd1c904cbd2-run-httpd\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.811123 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-config-data\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.812321 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/defc1f03-2277-4472-9ca8-2cd1c904cbd2-log-httpd\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.813489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.915659 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/defc1f03-2277-4472-9ca8-2cd1c904cbd2-run-httpd\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.915728 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-config-data\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.915826 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/defc1f03-2277-4472-9ca8-2cd1c904cbd2-log-httpd\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.915855 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.915895 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrx6b\" (UniqueName: \"kubernetes.io/projected/defc1f03-2277-4472-9ca8-2cd1c904cbd2-kube-api-access-lrx6b\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.915945 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.915967 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-scripts\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.916450 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/defc1f03-2277-4472-9ca8-2cd1c904cbd2-run-httpd\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.919590 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/defc1f03-2277-4472-9ca8-2cd1c904cbd2-log-httpd\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.919797 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.920380 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-scripts\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.923783 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.927023 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-config-data\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:27 crc kubenswrapper[4805]: I1216 12:18:27.936264 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrx6b\" (UniqueName: \"kubernetes.io/projected/defc1f03-2277-4472-9ca8-2cd1c904cbd2-kube-api-access-lrx6b\") pod \"ceilometer-0\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " pod="openstack/ceilometer-0" Dec 16 12:18:28 crc kubenswrapper[4805]: I1216 12:18:28.113602 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:18:28 crc kubenswrapper[4805]: I1216 12:18:28.280923 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dc66n"] Dec 16 12:18:28 crc kubenswrapper[4805]: I1216 12:18:28.535228 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3353e601-87dd-4843-b644-dcc50890f303" path="/var/lib/kubelet/pods/3353e601-87dd-4843-b644-dcc50890f303/volumes" Dec 16 12:18:28 crc kubenswrapper[4805]: W1216 12:18:28.673513 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddefc1f03_2277_4472_9ca8_2cd1c904cbd2.slice/crio-9dbf85e83202890a18c9e95c25551296ad964416cf6380bc817cf6db929b1963 WatchSource:0}: Error finding container 9dbf85e83202890a18c9e95c25551296ad964416cf6380bc817cf6db929b1963: Status 404 returned error can't find the container with id 9dbf85e83202890a18c9e95c25551296ad964416cf6380bc817cf6db929b1963 Dec 16 12:18:28 crc kubenswrapper[4805]: I1216 12:18:28.688612 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:29 crc kubenswrapper[4805]: I1216 12:18:29.117740 4805 generic.go:334] "Generic (PLEG): container finished" podID="ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" containerID="733ad8f5c765ea8c3149206f04c8ed874b979fc8efe82b3315d457585de4bd42" exitCode=0 Dec 16 12:18:29 crc kubenswrapper[4805]: I1216 12:18:29.117814 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc66n" event={"ID":"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa","Type":"ContainerDied","Data":"733ad8f5c765ea8c3149206f04c8ed874b979fc8efe82b3315d457585de4bd42"} Dec 16 12:18:29 crc kubenswrapper[4805]: I1216 12:18:29.117840 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc66n" event={"ID":"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa","Type":"ContainerStarted","Data":"a3841dd1d9c2192f95aa6163e918757f2799488d18dfc988d585d9bb0b3b3af5"} Dec 16 12:18:29 crc kubenswrapper[4805]: I1216 12:18:29.121063 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"defc1f03-2277-4472-9ca8-2cd1c904cbd2","Type":"ContainerStarted","Data":"9dbf85e83202890a18c9e95c25551296ad964416cf6380bc817cf6db929b1963"} Dec 16 12:18:29 crc kubenswrapper[4805]: I1216 12:18:29.548788 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 
12:18:30 crc kubenswrapper[4805]: I1216 12:18:30.136247 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"defc1f03-2277-4472-9ca8-2cd1c904cbd2","Type":"ContainerStarted","Data":"187ee1bc45db7e45894126495f1c1149559d7cfc5f4dba1f636924439a091396"} Dec 16 12:18:31 crc kubenswrapper[4805]: E1216 12:18:31.482658 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 16 12:18:31 crc kubenswrapper[4805]: I1216 12:18:31.487752 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc66n" event={"ID":"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa","Type":"ContainerStarted","Data":"5c061e33bcb8ba716eaf7067035e1524a63e6269d4d519b8ebe00c7be5e12ac8"} Dec 16 12:18:31 crc kubenswrapper[4805]: E1216 12:18:31.490330 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 16 12:18:31 crc kubenswrapper[4805]: I1216 12:18:31.495986 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"defc1f03-2277-4472-9ca8-2cd1c904cbd2","Type":"ContainerStarted","Data":"824f87a71fb1d7ae66ee34f7957b96e353c5fa4ae1ea6335813124ea3ae2f7e5"} Dec 16 12:18:31 crc kubenswrapper[4805]: E1216 12:18:31.511901 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 16 12:18:31 crc kubenswrapper[4805]: E1216 12:18:31.512005 4805 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="0f19678d-cbc8-49f7-ad36-5b4886220730" containerName="nova-cell0-conductor-conductor" Dec 16 12:18:32 crc kubenswrapper[4805]: I1216 12:18:32.506873 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"defc1f03-2277-4472-9ca8-2cd1c904cbd2","Type":"ContainerStarted","Data":"585b2496f18f397c1578e65d267e82cdf95ba6cd764abd61f75d8d9142ca3008"} Dec 16 12:18:34 crc kubenswrapper[4805]: I1216 12:18:34.702443 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"defc1f03-2277-4472-9ca8-2cd1c904cbd2","Type":"ContainerStarted","Data":"1c6d0df57204b417c5c20a8102cf6f50f8d614efefe0310ed5d052744853a7b6"} Dec 16 12:18:34 crc kubenswrapper[4805]: I1216 12:18:34.702950 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="ceilometer-central-agent" containerID="cri-o://187ee1bc45db7e45894126495f1c1149559d7cfc5f4dba1f636924439a091396" gracePeriod=30 Dec 16 12:18:34 crc kubenswrapper[4805]: I1216 12:18:34.703066 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ceilometer-0" Dec 16 12:18:34 crc kubenswrapper[4805]: I1216 12:18:34.703200 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="proxy-httpd" containerID="cri-o://1c6d0df57204b417c5c20a8102cf6f50f8d614efefe0310ed5d052744853a7b6" gracePeriod=30 Dec 16 12:18:34 crc kubenswrapper[4805]: I1216 12:18:34.703278 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="ceilometer-notification-agent" containerID="cri-o://824f87a71fb1d7ae66ee34f7957b96e353c5fa4ae1ea6335813124ea3ae2f7e5" gracePeriod=30 Dec 16 12:18:34 crc kubenswrapper[4805]: I1216 12:18:34.703315 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="sg-core" containerID="cri-o://585b2496f18f397c1578e65d267e82cdf95ba6cd764abd61f75d8d9142ca3008" gracePeriod=30 Dec 16 12:18:34 crc kubenswrapper[4805]: I1216 12:18:34.746286 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.663752521 podStartE2EDuration="7.746265579s" podCreationTimestamp="2025-12-16 12:18:27 +0000 UTC" firstStartedPulling="2025-12-16 12:18:28.675666673 +0000 UTC m=+1382.393924478" lastFinishedPulling="2025-12-16 12:18:33.758179731 +0000 UTC m=+1387.476437536" observedRunningTime="2025-12-16 12:18:34.743640253 +0000 UTC m=+1388.461898078" watchObservedRunningTime="2025-12-16 12:18:34.746265579 +0000 UTC m=+1388.464523394" Dec 16 12:18:35 crc kubenswrapper[4805]: E1216 12:18:35.082827 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddefc1f03_2277_4472_9ca8_2cd1c904cbd2.slice/crio-1c6d0df57204b417c5c20a8102cf6f50f8d614efefe0310ed5d052744853a7b6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddefc1f03_2277_4472_9ca8_2cd1c904cbd2.slice/crio-conmon-1c6d0df57204b417c5c20a8102cf6f50f8d614efefe0310ed5d052744853a7b6.scope\": RecentStats: unable to find data in memory cache]" Dec 16 12:18:35 crc kubenswrapper[4805]: I1216 12:18:35.912572 4805 generic.go:334] "Generic (PLEG): container finished" podID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerID="1c6d0df57204b417c5c20a8102cf6f50f8d614efefe0310ed5d052744853a7b6" exitCode=0 Dec 16 12:18:35 crc kubenswrapper[4805]: I1216 12:18:35.912607 4805 generic.go:334] "Generic (PLEG): container finished" podID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerID="585b2496f18f397c1578e65d267e82cdf95ba6cd764abd61f75d8d9142ca3008" exitCode=2 Dec 16 12:18:35 crc kubenswrapper[4805]: I1216 12:18:35.912616 4805 generic.go:334] "Generic (PLEG): container finished" podID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerID="824f87a71fb1d7ae66ee34f7957b96e353c5fa4ae1ea6335813124ea3ae2f7e5" exitCode=0 Dec 16 12:18:35 crc kubenswrapper[4805]: I1216 12:18:35.912635 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"defc1f03-2277-4472-9ca8-2cd1c904cbd2","Type":"ContainerDied","Data":"1c6d0df57204b417c5c20a8102cf6f50f8d614efefe0310ed5d052744853a7b6"} Dec 16 12:18:35 crc kubenswrapper[4805]: I1216 12:18:35.912658 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"defc1f03-2277-4472-9ca8-2cd1c904cbd2","Type":"ContainerDied","Data":"585b2496f18f397c1578e65d267e82cdf95ba6cd764abd61f75d8d9142ca3008"} Dec 16 12:18:35 crc kubenswrapper[4805]: I1216 12:18:35.912667 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"defc1f03-2277-4472-9ca8-2cd1c904cbd2","Type":"ContainerDied","Data":"824f87a71fb1d7ae66ee34f7957b96e353c5fa4ae1ea6335813124ea3ae2f7e5"} Dec 16 12:18:36 crc kubenswrapper[4805]: E1216 12:18:36.478743 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 16 12:18:36 crc kubenswrapper[4805]: E1216 12:18:36.480659 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 16 12:18:36 crc kubenswrapper[4805]: E1216 12:18:36.482272 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 16 12:18:36 crc kubenswrapper[4805]: E1216 12:18:36.482305 4805 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="0f19678d-cbc8-49f7-ad36-5b4886220730" containerName="nova-cell0-conductor-conductor" Dec 16 12:18:36 crc kubenswrapper[4805]: I1216 12:18:36.923569 4805 generic.go:334] "Generic (PLEG): container finished" podID="ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" containerID="5c061e33bcb8ba716eaf7067035e1524a63e6269d4d519b8ebe00c7be5e12ac8" exitCode=0 Dec 16 12:18:36 crc kubenswrapper[4805]: I1216 12:18:36.923613 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc66n" event={"ID":"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa","Type":"ContainerDied","Data":"5c061e33bcb8ba716eaf7067035e1524a63e6269d4d519b8ebe00c7be5e12ac8"} Dec 16 12:18:37 crc kubenswrapper[4805]: I1216 12:18:37.937541 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc66n" event={"ID":"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa","Type":"ContainerStarted","Data":"064cbed49d69b11d4bdcaf1a9481f17a0768b2cc15c942fa091d983640f0b7a2"} Dec 16 12:18:37 crc kubenswrapper[4805]: I1216 12:18:37.959752 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dc66n" podStartSLOduration=2.714028351 podStartE2EDuration="10.959733824s" podCreationTimestamp="2025-12-16 12:18:27 +0000 UTC" firstStartedPulling="2025-12-16 12:18:29.120340731 +0000 UTC m=+1382.838598536" lastFinishedPulling="2025-12-16 12:18:37.366046204 +0000 UTC m=+1391.084304009" observedRunningTime="2025-12-16 12:18:37.956315476 +0000 UTC m=+1391.674573291" watchObservedRunningTime="2025-12-16 12:18:37.959733824 +0000 UTC 
Dec 16 12:18:38 crc kubenswrapper[4805]: I1216 12:18:38.956480 4805 generic.go:334] "Generic (PLEG): container finished" podID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerID="187ee1bc45db7e45894126495f1c1149559d7cfc5f4dba1f636924439a091396" exitCode=0 Dec 16 12:18:38 crc kubenswrapper[4805]: I1216 12:18:38.956539 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"defc1f03-2277-4472-9ca8-2cd1c904cbd2","Type":"ContainerDied","Data":"187ee1bc45db7e45894126495f1c1149559d7cfc5f4dba1f636924439a091396"} Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.331250 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.422778 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-scripts\") pod \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.422901 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-combined-ca-bundle\") pod \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.422951 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/defc1f03-2277-4472-9ca8-2cd1c904cbd2-run-httpd\") pod \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.423004 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/defc1f03-2277-4472-9ca8-2cd1c904cbd2-log-httpd\") pod \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.423171 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-config-data\") pod \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.423194 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-sg-core-conf-yaml\") pod \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.423236 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrx6b\" (UniqueName: \"kubernetes.io/projected/defc1f03-2277-4472-9ca8-2cd1c904cbd2-kube-api-access-lrx6b\") pod \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\" (UID: \"defc1f03-2277-4472-9ca8-2cd1c904cbd2\") " Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.424780 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/defc1f03-2277-4472-9ca8-2cd1c904cbd2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "defc1f03-2277-4472-9ca8-2cd1c904cbd2" (UID: "defc1f03-2277-4472-9ca8-2cd1c904cbd2").
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.428494 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/defc1f03-2277-4472-9ca8-2cd1c904cbd2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "defc1f03-2277-4472-9ca8-2cd1c904cbd2" (UID: "defc1f03-2277-4472-9ca8-2cd1c904cbd2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.453648 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-scripts" (OuterVolumeSpecName: "scripts") pod "defc1f03-2277-4472-9ca8-2cd1c904cbd2" (UID: "defc1f03-2277-4472-9ca8-2cd1c904cbd2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.479401 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/defc1f03-2277-4472-9ca8-2cd1c904cbd2-kube-api-access-lrx6b" (OuterVolumeSpecName: "kube-api-access-lrx6b") pod "defc1f03-2277-4472-9ca8-2cd1c904cbd2" (UID: "defc1f03-2277-4472-9ca8-2cd1c904cbd2"). InnerVolumeSpecName "kube-api-access-lrx6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.542357 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrx6b\" (UniqueName: \"kubernetes.io/projected/defc1f03-2277-4472-9ca8-2cd1c904cbd2-kube-api-access-lrx6b\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.542402 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.542415 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/defc1f03-2277-4472-9ca8-2cd1c904cbd2-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.542426 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/defc1f03-2277-4472-9ca8-2cd1c904cbd2-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.584697 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "defc1f03-2277-4472-9ca8-2cd1c904cbd2" (UID: "defc1f03-2277-4472-9ca8-2cd1c904cbd2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.644301 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.705396 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "defc1f03-2277-4472-9ca8-2cd1c904cbd2" (UID: "defc1f03-2277-4472-9ca8-2cd1c904cbd2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.717210 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-config-data" (OuterVolumeSpecName: "config-data") pod "defc1f03-2277-4472-9ca8-2cd1c904cbd2" (UID: "defc1f03-2277-4472-9ca8-2cd1c904cbd2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.750567 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.751835 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defc1f03-2277-4472-9ca8-2cd1c904cbd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.968338 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"defc1f03-2277-4472-9ca8-2cd1c904cbd2","Type":"ContainerDied","Data":"9dbf85e83202890a18c9e95c25551296ad964416cf6380bc817cf6db929b1963"} Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.969374 4805 scope.go:117] "RemoveContainer" containerID="1c6d0df57204b417c5c20a8102cf6f50f8d614efefe0310ed5d052744853a7b6" Dec 16 12:18:39 crc kubenswrapper[4805]: I1216 12:18:39.968602 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.000007 4805 scope.go:117] "RemoveContainer" containerID="585b2496f18f397c1578e65d267e82cdf95ba6cd764abd61f75d8d9142ca3008" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.014943 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.038267 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.045970 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:40 crc kubenswrapper[4805]: E1216 12:18:40.046632 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="sg-core" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.046702 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="sg-core" Dec 16 12:18:40 crc kubenswrapper[4805]: E1216 12:18:40.046771 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="ceilometer-central-agent" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.046837 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="ceilometer-central-agent" Dec 16 12:18:40 crc kubenswrapper[4805]: E1216 12:18:40.046906 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="ceilometer-notification-agent" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.047006 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="ceilometer-notification-agent" Dec 16 12:18:40 crc kubenswrapper[4805]: E1216 
12:18:40.047084 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="proxy-httpd" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.047160 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="proxy-httpd" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.047394 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="ceilometer-notification-agent" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.047471 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="sg-core" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.047530 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="proxy-httpd" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.047587 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" containerName="ceilometer-central-agent" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.049367 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.055710 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.056131 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.076433 4805 scope.go:117] "RemoveContainer" containerID="824f87a71fb1d7ae66ee34f7957b96e353c5fa4ae1ea6335813124ea3ae2f7e5" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.083911 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.113958 4805 scope.go:117] "RemoveContainer" containerID="187ee1bc45db7e45894126495f1c1149559d7cfc5f4dba1f636924439a091396" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.159060 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqw4k\" (UniqueName: \"kubernetes.io/projected/0301416f-6de4-41bd-86e5-c6861d64ee07-kube-api-access-rqw4k\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.159125 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-scripts\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.159183 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.159203 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-config-data\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.159225 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0301416f-6de4-41bd-86e5-c6861d64ee07-log-httpd\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.159255 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.159305 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0301416f-6de4-41bd-86e5-c6861d64ee07-run-httpd\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.261398 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-scripts\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.261708 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.261818 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-config-data\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.262374 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0301416f-6de4-41bd-86e5-c6861d64ee07-log-httpd\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.262914 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.263060 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0301416f-6de4-41bd-86e5-c6861d64ee07-run-httpd\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.263281 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rqw4k\" (UniqueName: \"kubernetes.io/projected/0301416f-6de4-41bd-86e5-c6861d64ee07-kube-api-access-rqw4k\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.262817 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0301416f-6de4-41bd-86e5-c6861d64ee07-log-httpd\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.264323 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0301416f-6de4-41bd-86e5-c6861d64ee07-run-httpd\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.273274 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-scripts\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.274089 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.283423 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.284720 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqw4k\" (UniqueName: \"kubernetes.io/projected/0301416f-6de4-41bd-86e5-c6861d64ee07-kube-api-access-rqw4k\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.285913 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-config-data\") pod \"ceilometer-0\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " pod="openstack/ceilometer-0" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.378209 4805 util.go:30] "No sandbox for pod can be found. 
Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.540979 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="defc1f03-2277-4472-9ca8-2cd1c904cbd2" path="/var/lib/kubelet/pods/defc1f03-2277-4472-9ca8-2cd1c904cbd2/volumes" Dec 16 12:18:40 crc kubenswrapper[4805]: I1216 12:18:40.930070 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:18:41 crc kubenswrapper[4805]: I1216 12:18:41.025941 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0301416f-6de4-41bd-86e5-c6861d64ee07","Type":"ContainerStarted","Data":"f1aa0f00024596a535bf1bebba63caffa469bc7f48493d5ec83e2b3c3dd524c7"} Dec 16 12:18:41 crc kubenswrapper[4805]: E1216 12:18:41.477909 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 16 12:18:41 crc kubenswrapper[4805]: E1216 12:18:41.479826 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 16 12:18:41 crc kubenswrapper[4805]: E1216 12:18:41.481047 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 16 12:18:41 crc kubenswrapper[4805]: E1216 12:18:41.481089 4805 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="0f19678d-cbc8-49f7-ad36-5b4886220730" containerName="nova-cell0-conductor-conductor" Dec 16 12:18:44 crc kubenswrapper[4805]: I1216 12:18:44.061410 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0301416f-6de4-41bd-86e5-c6861d64ee07","Type":"ContainerStarted","Data":"1f41b6f481394418f818c132b49aac7db96286e0c4bbbda241fbb9fe7ee35e05"} Dec 16 12:18:44 crc kubenswrapper[4805]: I1216 12:18:44.061936 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0301416f-6de4-41bd-86e5-c6861d64ee07","Type":"ContainerStarted","Data":"66ccc4874bd8432eeb48c8dcb2256921e37268a237c578a8ca9f9138c85e75e3"} Dec 16 12:18:45 crc kubenswrapper[4805]: I1216 12:18:45.072215 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0301416f-6de4-41bd-86e5-c6861d64ee07","Type":"ContainerStarted","Data":"e95855c94ad74684a60a96f90e0a11695e11b34db554ffcc7d8f2728395028e3"} Dec 16 12:18:46 crc kubenswrapper[4805]: E1216 12:18:46.480476 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1"
containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 16 12:18:46 crc kubenswrapper[4805]: E1216 12:18:46.481680 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 16 12:18:46 crc kubenswrapper[4805]: E1216 12:18:46.483932 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 16 12:18:46 crc kubenswrapper[4805]: E1216 12:18:46.483982 4805 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="0f19678d-cbc8-49f7-ad36-5b4886220730" containerName="nova-cell0-conductor-conductor" Dec 16 12:18:47 crc kubenswrapper[4805]: I1216 12:18:47.101599 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0301416f-6de4-41bd-86e5-c6861d64ee07","Type":"ContainerStarted","Data":"d75581027b22c7767d415dc5c770cea8713dc773665ad24a05e7fffa9dd1e51e"} Dec 16 12:18:47 crc kubenswrapper[4805]: I1216 12:18:47.102986 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 16 12:18:47 crc kubenswrapper[4805]: I1216 12:18:47.125996 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.932662763 podStartE2EDuration="7.125979029s" podCreationTimestamp="2025-12-16 12:18:40 +0000 UTC" firstStartedPulling="2025-12-16 12:18:40.972949649 +0000 UTC m=+1394.691207444" lastFinishedPulling="2025-12-16 12:18:46.166265905 +0000 UTC m=+1399.884523710" observedRunningTime="2025-12-16 12:18:47.122323854 +0000 UTC m=+1400.840581659" watchObservedRunningTime="2025-12-16 12:18:47.125979029 +0000 UTC m=+1400.844236844" Dec 16 12:18:47 crc kubenswrapper[4805]: I1216 12:18:47.736418 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:47 crc kubenswrapper[4805]: I1216 12:18:47.736504 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:48 crc kubenswrapper[4805]: I1216 12:18:48.794336 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dc66n" podUID="ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" containerName="registry-server" probeResult="failure" output=< Dec 16 12:18:48 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 16 12:18:48 crc kubenswrapper[4805]: > Dec 16 12:18:51 crc kubenswrapper[4805]: E1216 12:18:51.478200 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 16 12:18:51 crc kubenswrapper[4805]: E1216 12:18:51.480473 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 16 12:18:51 crc kubenswrapper[4805]: E1216 12:18:51.482018 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 16 12:18:51 crc kubenswrapper[4805]: E1216 12:18:51.482075 4805 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="0f19678d-cbc8-49f7-ad36-5b4886220730" containerName="nova-cell0-conductor-conductor" Dec 16 12:18:53 crc kubenswrapper[4805]: I1216 12:18:53.174114 4805 generic.go:334] "Generic (PLEG): container finished" podID="0f19678d-cbc8-49f7-ad36-5b4886220730" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" exitCode=137 Dec 16 12:18:53 crc kubenswrapper[4805]: I1216 12:18:53.174193 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0f19678d-cbc8-49f7-ad36-5b4886220730","Type":"ContainerDied","Data":"7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e"} Dec 16 12:18:53 crc kubenswrapper[4805]: I1216 12:18:53.539298 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:53 crc kubenswrapper[4805]: I1216 12:18:53.639484 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f19678d-cbc8-49f7-ad36-5b4886220730-combined-ca-bundle\") pod \"0f19678d-cbc8-49f7-ad36-5b4886220730\" (UID: \"0f19678d-cbc8-49f7-ad36-5b4886220730\") " Dec 16 12:18:53 crc kubenswrapper[4805]: I1216 12:18:53.639679 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4qbs\" (UniqueName: \"kubernetes.io/projected/0f19678d-cbc8-49f7-ad36-5b4886220730-kube-api-access-j4qbs\") pod \"0f19678d-cbc8-49f7-ad36-5b4886220730\" (UID: \"0f19678d-cbc8-49f7-ad36-5b4886220730\") " Dec 16 12:18:53 crc kubenswrapper[4805]: I1216 12:18:53.639824 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f19678d-cbc8-49f7-ad36-5b4886220730-config-data\") pod \"0f19678d-cbc8-49f7-ad36-5b4886220730\" (UID: \"0f19678d-cbc8-49f7-ad36-5b4886220730\") " Dec 16 12:18:53 crc kubenswrapper[4805]: I1216 12:18:53.647538 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f19678d-cbc8-49f7-ad36-5b4886220730-kube-api-access-j4qbs" (OuterVolumeSpecName: "kube-api-access-j4qbs") pod "0f19678d-cbc8-49f7-ad36-5b4886220730" (UID: "0f19678d-cbc8-49f7-ad36-5b4886220730"). InnerVolumeSpecName "kube-api-access-j4qbs". 
Dec 16 12:18:53 crc kubenswrapper[4805]: I1216 12:18:53.669007 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f19678d-cbc8-49f7-ad36-5b4886220730-config-data" (OuterVolumeSpecName: "config-data") pod "0f19678d-cbc8-49f7-ad36-5b4886220730" (UID: "0f19678d-cbc8-49f7-ad36-5b4886220730"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:53 crc kubenswrapper[4805]: I1216 12:18:53.673485 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f19678d-cbc8-49f7-ad36-5b4886220730-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f19678d-cbc8-49f7-ad36-5b4886220730" (UID: "0f19678d-cbc8-49f7-ad36-5b4886220730"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:18:53 crc kubenswrapper[4805]: I1216 12:18:53.751876 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f19678d-cbc8-49f7-ad36-5b4886220730-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:53 crc kubenswrapper[4805]: I1216 12:18:53.751926 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f19678d-cbc8-49f7-ad36-5b4886220730-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:53 crc kubenswrapper[4805]: I1216 12:18:53.751938 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4qbs\" (UniqueName: \"kubernetes.io/projected/0f19678d-cbc8-49f7-ad36-5b4886220730-kube-api-access-j4qbs\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.186424 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0f19678d-cbc8-49f7-ad36-5b4886220730","Type":"ContainerDied","Data":"c4a8a2c8426f4af981f62fdba66f1d4c9d2f2e8995d0b65efaeaa7c7618c8b9c"} Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.186485 4805 scope.go:117] "RemoveContainer" containerID="7208336b2f2b57831324960c283bfaea2cc83b4e62ca7e00a8563a56c9b32e6e" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.186645 4805 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.226033 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.239513 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.247540 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 16 12:18:54 crc kubenswrapper[4805]: E1216 12:18:54.248113 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f19678d-cbc8-49f7-ad36-5b4886220730" containerName="nova-cell0-conductor-conductor" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.248153 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f19678d-cbc8-49f7-ad36-5b4886220730" containerName="nova-cell0-conductor-conductor" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.248441 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f19678d-cbc8-49f7-ad36-5b4886220730" containerName="nova-cell0-conductor-conductor" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.249374 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.256388 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.289799 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xdpdc" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.289816 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.392629 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be1f6a3-3b42-4981-86ff-8851e931cd97-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7be1f6a3-3b42-4981-86ff-8851e931cd97\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.392705 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb6lh\" (UniqueName: \"kubernetes.io/projected/7be1f6a3-3b42-4981-86ff-8851e931cd97-kube-api-access-bb6lh\") pod \"nova-cell0-conductor-0\" (UID: \"7be1f6a3-3b42-4981-86ff-8851e931cd97\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.392802 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be1f6a3-3b42-4981-86ff-8851e931cd97-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7be1f6a3-3b42-4981-86ff-8851e931cd97\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.494234 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be1f6a3-3b42-4981-86ff-8851e931cd97-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7be1f6a3-3b42-4981-86ff-8851e931cd97\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.494307 4805 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bb6lh\" (UniqueName: \"kubernetes.io/projected/7be1f6a3-3b42-4981-86ff-8851e931cd97-kube-api-access-bb6lh\") pod \"nova-cell0-conductor-0\" (UID: \"7be1f6a3-3b42-4981-86ff-8851e931cd97\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.494383 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be1f6a3-3b42-4981-86ff-8851e931cd97-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7be1f6a3-3b42-4981-86ff-8851e931cd97\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.500039 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be1f6a3-3b42-4981-86ff-8851e931cd97-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7be1f6a3-3b42-4981-86ff-8851e931cd97\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.505842 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be1f6a3-3b42-4981-86ff-8851e931cd97-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7be1f6a3-3b42-4981-86ff-8851e931cd97\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.516057 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb6lh\" (UniqueName: \"kubernetes.io/projected/7be1f6a3-3b42-4981-86ff-8851e931cd97-kube-api-access-bb6lh\") pod \"nova-cell0-conductor-0\" (UID: \"7be1f6a3-3b42-4981-86ff-8851e931cd97\") " pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.532478 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f19678d-cbc8-49f7-ad36-5b4886220730" path="/var/lib/kubelet/pods/0f19678d-cbc8-49f7-ad36-5b4886220730/volumes" Dec 16 12:18:54 crc kubenswrapper[4805]: I1216 12:18:54.614668 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:55 crc kubenswrapper[4805]: I1216 12:18:55.095902 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 16 12:18:55 crc kubenswrapper[4805]: I1216 12:18:55.200213 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7be1f6a3-3b42-4981-86ff-8851e931cd97","Type":"ContainerStarted","Data":"9cc8dc6924d05493d6094028d930888a77e29cd159956ff4904453699335001d"} Dec 16 12:18:56 crc kubenswrapper[4805]: I1216 12:18:56.212599 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7be1f6a3-3b42-4981-86ff-8851e931cd97","Type":"ContainerStarted","Data":"90fe2721ccb3e399f12a52f241a8cf1a5ae662d52155a213fb285e18c401c626"} Dec 16 12:18:56 crc kubenswrapper[4805]: I1216 12:18:56.212827 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 16 12:18:56 crc kubenswrapper[4805]: I1216 12:18:56.239852 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.23983219 podStartE2EDuration="2.23983219s" podCreationTimestamp="2025-12-16 12:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:18:56.233472827 +0000 UTC m=+1409.951730632" watchObservedRunningTime="2025-12-16 12:18:56.23983219 +0000 UTC m=+1409.958090025" Dec 16 12:18:57 crc kubenswrapper[4805]: I1216 12:18:57.071782 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:18:57 crc kubenswrapper[4805]: I1216 12:18:57.071848 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:18:57 crc kubenswrapper[4805]: I1216 12:18:57.071894 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 12:18:57 crc kubenswrapper[4805]: I1216 12:18:57.072613 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"52cf8f6f2f746633bfd1f446a1bda7ea7f3c784cb6d567e07a1cc669ed487201"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 12:18:57 crc kubenswrapper[4805]: I1216 12:18:57.072676 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://52cf8f6f2f746633bfd1f446a1bda7ea7f3c784cb6d567e07a1cc669ed487201" gracePeriod=600 Dec 16 12:18:57 crc kubenswrapper[4805]: I1216 12:18:57.229768 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" 
containerID="52cf8f6f2f746633bfd1f446a1bda7ea7f3c784cb6d567e07a1cc669ed487201" exitCode=0 Dec 16 12:18:57 crc kubenswrapper[4805]: I1216 12:18:57.229823 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"52cf8f6f2f746633bfd1f446a1bda7ea7f3c784cb6d567e07a1cc669ed487201"} Dec 16 12:18:57 crc kubenswrapper[4805]: I1216 12:18:57.230159 4805 scope.go:117] "RemoveContainer" containerID="49407ccdd3d008f1744b80cf9c050b56468ce45acde47e6aab9b00525b75e878" Dec 16 12:18:57 crc kubenswrapper[4805]: I1216 12:18:57.788364 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:57 crc kubenswrapper[4805]: I1216 12:18:57.835870 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:58 crc kubenswrapper[4805]: I1216 12:18:58.240599 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308"} Dec 16 12:18:58 crc kubenswrapper[4805]: I1216 12:18:58.542342 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dc66n"] Dec 16 12:18:59 crc kubenswrapper[4805]: I1216 12:18:59.249518 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dc66n" podUID="ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" containerName="registry-server" containerID="cri-o://064cbed49d69b11d4bdcaf1a9481f17a0768b2cc15c942fa091d983640f0b7a2" gracePeriod=2 Dec 16 12:18:59 crc kubenswrapper[4805]: I1216 12:18:59.710479 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:18:59 crc kubenswrapper[4805]: I1216 12:18:59.819173 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-catalog-content\") pod \"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa\" (UID: \"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa\") " Dec 16 12:18:59 crc kubenswrapper[4805]: I1216 12:18:59.819430 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc276\" (UniqueName: \"kubernetes.io/projected/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-kube-api-access-sc276\") pod \"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa\" (UID: \"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa\") " Dec 16 12:18:59 crc kubenswrapper[4805]: I1216 12:18:59.819477 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-utilities\") pod \"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa\" (UID: \"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa\") " Dec 16 12:18:59 crc kubenswrapper[4805]: I1216 12:18:59.820458 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-utilities" (OuterVolumeSpecName: "utilities") pod "ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" (UID: "ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:18:59 crc kubenswrapper[4805]: I1216 12:18:59.826768 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-kube-api-access-sc276" (OuterVolumeSpecName: "kube-api-access-sc276") pod "ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" (UID: "ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa"). InnerVolumeSpecName "kube-api-access-sc276". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:18:59 crc kubenswrapper[4805]: I1216 12:18:59.921337 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc276\" (UniqueName: \"kubernetes.io/projected/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-kube-api-access-sc276\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:59 crc kubenswrapper[4805]: I1216 12:18:59.921364 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:18:59 crc kubenswrapper[4805]: I1216 12:18:59.954296 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" (UID: "ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.022917 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.263033 4805 generic.go:334] "Generic (PLEG): container finished" podID="ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" containerID="064cbed49d69b11d4bdcaf1a9481f17a0768b2cc15c942fa091d983640f0b7a2" exitCode=0 Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.263076 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dc66n" Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.263093 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc66n" event={"ID":"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa","Type":"ContainerDied","Data":"064cbed49d69b11d4bdcaf1a9481f17a0768b2cc15c942fa091d983640f0b7a2"} Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.266762 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc66n" event={"ID":"ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa","Type":"ContainerDied","Data":"a3841dd1d9c2192f95aa6163e918757f2799488d18dfc988d585d9bb0b3b3af5"} Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.266788 4805 scope.go:117] "RemoveContainer" containerID="064cbed49d69b11d4bdcaf1a9481f17a0768b2cc15c942fa091d983640f0b7a2" Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.302475 4805 scope.go:117] "RemoveContainer" containerID="5c061e33bcb8ba716eaf7067035e1524a63e6269d4d519b8ebe00c7be5e12ac8" Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.308752 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dc66n"] Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.319214 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dc66n"] Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.334480 4805 scope.go:117] "RemoveContainer" containerID="733ad8f5c765ea8c3149206f04c8ed874b979fc8efe82b3315d457585de4bd42" Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.369842 4805 scope.go:117] "RemoveContainer" containerID="064cbed49d69b11d4bdcaf1a9481f17a0768b2cc15c942fa091d983640f0b7a2" Dec 16 12:19:00 crc kubenswrapper[4805]: E1216 12:19:00.370338 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064cbed49d69b11d4bdcaf1a9481f17a0768b2cc15c942fa091d983640f0b7a2\": container with ID starting with 064cbed49d69b11d4bdcaf1a9481f17a0768b2cc15c942fa091d983640f0b7a2 not found: ID does not exist" containerID="064cbed49d69b11d4bdcaf1a9481f17a0768b2cc15c942fa091d983640f0b7a2" Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.370372 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064cbed49d69b11d4bdcaf1a9481f17a0768b2cc15c942fa091d983640f0b7a2"} err="failed to get container status \"064cbed49d69b11d4bdcaf1a9481f17a0768b2cc15c942fa091d983640f0b7a2\": rpc error: code = NotFound desc = could not find container \"064cbed49d69b11d4bdcaf1a9481f17a0768b2cc15c942fa091d983640f0b7a2\": container with ID starting with 064cbed49d69b11d4bdcaf1a9481f17a0768b2cc15c942fa091d983640f0b7a2 not found: ID does not exist" Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.370394 4805 scope.go:117] "RemoveContainer" containerID="5c061e33bcb8ba716eaf7067035e1524a63e6269d4d519b8ebe00c7be5e12ac8" Dec 16 12:19:00 crc kubenswrapper[4805]: E1216 12:19:00.370609 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c061e33bcb8ba716eaf7067035e1524a63e6269d4d519b8ebe00c7be5e12ac8\": container with ID starting with 5c061e33bcb8ba716eaf7067035e1524a63e6269d4d519b8ebe00c7be5e12ac8 not found: ID does not exist" containerID="5c061e33bcb8ba716eaf7067035e1524a63e6269d4d519b8ebe00c7be5e12ac8" Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.370631 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c061e33bcb8ba716eaf7067035e1524a63e6269d4d519b8ebe00c7be5e12ac8"} err="failed to get container status \"5c061e33bcb8ba716eaf7067035e1524a63e6269d4d519b8ebe00c7be5e12ac8\": rpc error: code = NotFound desc = could not find container \"5c061e33bcb8ba716eaf7067035e1524a63e6269d4d519b8ebe00c7be5e12ac8\": container with ID starting with 5c061e33bcb8ba716eaf7067035e1524a63e6269d4d519b8ebe00c7be5e12ac8 not found: ID does not exist" Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.370647 4805 scope.go:117] "RemoveContainer" containerID="733ad8f5c765ea8c3149206f04c8ed874b979fc8efe82b3315d457585de4bd42" Dec 16 12:19:00 crc kubenswrapper[4805]: E1216 12:19:00.370822 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"733ad8f5c765ea8c3149206f04c8ed874b979fc8efe82b3315d457585de4bd42\": container with ID starting with 733ad8f5c765ea8c3149206f04c8ed874b979fc8efe82b3315d457585de4bd42 not found: ID does not exist" containerID="733ad8f5c765ea8c3149206f04c8ed874b979fc8efe82b3315d457585de4bd42" Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.370842 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733ad8f5c765ea8c3149206f04c8ed874b979fc8efe82b3315d457585de4bd42"} err="failed to get container status \"733ad8f5c765ea8c3149206f04c8ed874b979fc8efe82b3315d457585de4bd42\": rpc error: code = NotFound desc = could not find container \"733ad8f5c765ea8c3149206f04c8ed874b979fc8efe82b3315d457585de4bd42\": container with ID starting with 733ad8f5c765ea8c3149206f04c8ed874b979fc8efe82b3315d457585de4bd42 not found: ID does not exist" Dec 16 12:19:00 crc kubenswrapper[4805]: I1216 12:19:00.533007 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" path="/var/lib/kubelet/pods/ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa/volumes" Dec 16 12:19:04 crc kubenswrapper[4805]: I1216 12:19:04.642405 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.275923 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-c26fd"] Dec 16 12:19:05 crc kubenswrapper[4805]: E1216 12:19:05.276365 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" containerName="registry-server" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.276380 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" containerName="registry-server" Dec 16 12:19:05 crc kubenswrapper[4805]: E1216 12:19:05.276398 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" containerName="extract-content" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.276404 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" containerName="extract-content" Dec 16 12:19:05 crc kubenswrapper[4805]: E1216 12:19:05.276418 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" containerName="extract-utilities" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.276428 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" containerName="extract-utilities" Dec 16 
12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.276624 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad4ae5ca-5b11-41bc-bf35-f7698d1d8ffa" containerName="registry-server" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.278281 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.281931 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.288813 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.327003 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-c26fd"] Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.437228 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-scripts\") pod \"nova-cell0-cell-mapping-c26fd\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.438404 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmsn7\" (UniqueName: \"kubernetes.io/projected/73a7a5f1-017b-4be6-b498-9c9d7416fada-kube-api-access-wmsn7\") pod \"nova-cell0-cell-mapping-c26fd\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.438561 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-config-data\") pod \"nova-cell0-cell-mapping-c26fd\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.438734 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c26fd\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.503302 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.505107 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.508441 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.531261 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.541455 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-scripts\") pod \"nova-cell0-cell-mapping-c26fd\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.541541 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmsn7\" (UniqueName: \"kubernetes.io/projected/73a7a5f1-017b-4be6-b498-9c9d7416fada-kube-api-access-wmsn7\") pod \"nova-cell0-cell-mapping-c26fd\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.541568 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-config-data\") pod \"nova-cell0-cell-mapping-c26fd\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.541618 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c26fd\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.552832 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-scripts\") pod \"nova-cell0-cell-mapping-c26fd\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.553391 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-config-data\") pod \"nova-cell0-cell-mapping-c26fd\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.553885 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c26fd\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.572619 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmsn7\" (UniqueName: \"kubernetes.io/projected/73a7a5f1-017b-4be6-b498-9c9d7416fada-kube-api-access-wmsn7\") pod \"nova-cell0-cell-mapping-c26fd\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.625013 4805 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-scheduler-0"] Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.626847 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.630635 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.643112 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0360fb3e-6835-48be-b3a9-8d56756d649d-logs\") pod \"nova-api-0\" (UID: \"0360fb3e-6835-48be-b3a9-8d56756d649d\") " pod="openstack/nova-api-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.650496 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0360fb3e-6835-48be-b3a9-8d56756d649d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0360fb3e-6835-48be-b3a9-8d56756d649d\") " pod="openstack/nova-api-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.650626 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqzw5\" (UniqueName: \"kubernetes.io/projected/0360fb3e-6835-48be-b3a9-8d56756d649d-kube-api-access-rqzw5\") pod \"nova-api-0\" (UID: \"0360fb3e-6835-48be-b3a9-8d56756d649d\") " pod="openstack/nova-api-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.650771 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0360fb3e-6835-48be-b3a9-8d56756d649d-config-data\") pod \"nova-api-0\" (UID: \"0360fb3e-6835-48be-b3a9-8d56756d649d\") " pod="openstack/nova-api-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.668831 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.687037 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.713595 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.727727 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.755585 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0360fb3e-6835-48be-b3a9-8d56756d649d-logs\") pod \"nova-api-0\" (UID: \"0360fb3e-6835-48be-b3a9-8d56756d649d\") " pod="openstack/nova-api-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.755892 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0360fb3e-6835-48be-b3a9-8d56756d649d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0360fb3e-6835-48be-b3a9-8d56756d649d\") " pod="openstack/nova-api-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.756017 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98dd959b-802c-4ea8-8876-a9a56e7fbafe-config-data\") pod \"nova-scheduler-0\" (UID: \"98dd959b-802c-4ea8-8876-a9a56e7fbafe\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.756133 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqzw5\" (UniqueName: \"kubernetes.io/projected/0360fb3e-6835-48be-b3a9-8d56756d649d-kube-api-access-rqzw5\") pod \"nova-api-0\" (UID: \"0360fb3e-6835-48be-b3a9-8d56756d649d\") " pod="openstack/nova-api-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.756299 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0360fb3e-6835-48be-b3a9-8d56756d649d-config-data\") pod \"nova-api-0\" (UID: \"0360fb3e-6835-48be-b3a9-8d56756d649d\") " pod="openstack/nova-api-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.756477 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98dd959b-802c-4ea8-8876-a9a56e7fbafe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98dd959b-802c-4ea8-8876-a9a56e7fbafe\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.756611 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2wzk\" (UniqueName: \"kubernetes.io/projected/98dd959b-802c-4ea8-8876-a9a56e7fbafe-kube-api-access-b2wzk\") pod \"nova-scheduler-0\" (UID: \"98dd959b-802c-4ea8-8876-a9a56e7fbafe\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.758938 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0360fb3e-6835-48be-b3a9-8d56756d649d-logs\") pod \"nova-api-0\" (UID: \"0360fb3e-6835-48be-b3a9-8d56756d649d\") " pod="openstack/nova-api-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.771322 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.800068 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0360fb3e-6835-48be-b3a9-8d56756d649d-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"0360fb3e-6835-48be-b3a9-8d56756d649d\") " pod="openstack/nova-api-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.800904 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqzw5\" (UniqueName: \"kubernetes.io/projected/0360fb3e-6835-48be-b3a9-8d56756d649d-kube-api-access-rqzw5\") pod \"nova-api-0\" (UID: \"0360fb3e-6835-48be-b3a9-8d56756d649d\") " pod="openstack/nova-api-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.807052 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0360fb3e-6835-48be-b3a9-8d56756d649d-config-data\") pod \"nova-api-0\" (UID: \"0360fb3e-6835-48be-b3a9-8d56756d649d\") " pod="openstack/nova-api-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.807882 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.837100 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.858118 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/062ffbac-0f68-408c-ab5b-b5a986e2f333-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " pod="openstack/nova-metadata-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.858181 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98dd959b-802c-4ea8-8876-a9a56e7fbafe-config-data\") pod \"nova-scheduler-0\" (UID: \"98dd959b-802c-4ea8-8876-a9a56e7fbafe\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.858257 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6jpz\" (UniqueName: \"kubernetes.io/projected/062ffbac-0f68-408c-ab5b-b5a986e2f333-kube-api-access-s6jpz\") pod \"nova-metadata-0\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " pod="openstack/nova-metadata-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.858297 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98dd959b-802c-4ea8-8876-a9a56e7fbafe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98dd959b-802c-4ea8-8876-a9a56e7fbafe\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.858320 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/062ffbac-0f68-408c-ab5b-b5a986e2f333-config-data\") pod \"nova-metadata-0\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " pod="openstack/nova-metadata-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.858342 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2wzk\" (UniqueName: \"kubernetes.io/projected/98dd959b-802c-4ea8-8876-a9a56e7fbafe-kube-api-access-b2wzk\") pod \"nova-scheduler-0\" (UID: \"98dd959b-802c-4ea8-8876-a9a56e7fbafe\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.858423 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/062ffbac-0f68-408c-ab5b-b5a986e2f333-logs\") pod \"nova-metadata-0\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " pod="openstack/nova-metadata-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.874058 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98dd959b-802c-4ea8-8876-a9a56e7fbafe-config-data\") pod \"nova-scheduler-0\" (UID: \"98dd959b-802c-4ea8-8876-a9a56e7fbafe\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.892446 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98dd959b-802c-4ea8-8876-a9a56e7fbafe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98dd959b-802c-4ea8-8876-a9a56e7fbafe\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.911392 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2wzk\" (UniqueName: \"kubernetes.io/projected/98dd959b-802c-4ea8-8876-a9a56e7fbafe-kube-api-access-b2wzk\") pod \"nova-scheduler-0\" (UID: \"98dd959b-802c-4ea8-8876-a9a56e7fbafe\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.933224 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.934818 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.949101 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.971574 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/062ffbac-0f68-408c-ab5b-b5a986e2f333-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " pod="openstack/nova-metadata-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.972326 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6jpz\" (UniqueName: \"kubernetes.io/projected/062ffbac-0f68-408c-ab5b-b5a986e2f333-kube-api-access-s6jpz\") pod \"nova-metadata-0\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " pod="openstack/nova-metadata-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.972412 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/062ffbac-0f68-408c-ab5b-b5a986e2f333-config-data\") pod \"nova-metadata-0\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " pod="openstack/nova-metadata-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.972664 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/062ffbac-0f68-408c-ab5b-b5a986e2f333-logs\") pod \"nova-metadata-0\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " pod="openstack/nova-metadata-0" Dec 16 12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.973169 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/062ffbac-0f68-408c-ab5b-b5a986e2f333-logs\") pod \"nova-metadata-0\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " pod="openstack/nova-metadata-0" Dec 16 
12:19:05 crc kubenswrapper[4805]: I1216 12:19:05.979742 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.026063 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6jpz\" (UniqueName: \"kubernetes.io/projected/062ffbac-0f68-408c-ab5b-b5a986e2f333-kube-api-access-s6jpz\") pod \"nova-metadata-0\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " pod="openstack/nova-metadata-0" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.059434 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/062ffbac-0f68-408c-ab5b-b5a986e2f333-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " pod="openstack/nova-metadata-0" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.059954 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/062ffbac-0f68-408c-ab5b-b5a986e2f333-config-data\") pod \"nova-metadata-0\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " pod="openstack/nova-metadata-0" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.072279 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-gdbhm"] Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.073905 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.074987 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltkxr\" (UniqueName: \"kubernetes.io/projected/78d52648-9036-4507-8f7a-0b1c7b2e51fc-kube-api-access-ltkxr\") pod \"nova-cell1-novncproxy-0\" (UID: \"78d52648-9036-4507-8f7a-0b1c7b2e51fc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.075074 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d52648-9036-4507-8f7a-0b1c7b2e51fc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78d52648-9036-4507-8f7a-0b1c7b2e51fc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.075128 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d52648-9036-4507-8f7a-0b1c7b2e51fc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78d52648-9036-4507-8f7a-0b1c7b2e51fc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.123612 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-gdbhm"] Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.156856 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.167837 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.184312 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d52648-9036-4507-8f7a-0b1c7b2e51fc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78d52648-9036-4507-8f7a-0b1c7b2e51fc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.184423 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d52648-9036-4507-8f7a-0b1c7b2e51fc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78d52648-9036-4507-8f7a-0b1c7b2e51fc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.184467 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.184493 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.184540 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw47q\" (UniqueName: \"kubernetes.io/projected/6f69017e-0c0e-403d-9ec9-685b8b565878-kube-api-access-vw47q\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.184620 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-config\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.184642 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltkxr\" (UniqueName: \"kubernetes.io/projected/78d52648-9036-4507-8f7a-0b1c7b2e51fc-kube-api-access-ltkxr\") pod \"nova-cell1-novncproxy-0\" (UID: \"78d52648-9036-4507-8f7a-0b1c7b2e51fc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.184667 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.184686 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" 
(UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.189789 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d52648-9036-4507-8f7a-0b1c7b2e51fc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78d52648-9036-4507-8f7a-0b1c7b2e51fc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.208035 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d52648-9036-4507-8f7a-0b1c7b2e51fc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78d52648-9036-4507-8f7a-0b1c7b2e51fc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.227639 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltkxr\" (UniqueName: \"kubernetes.io/projected/78d52648-9036-4507-8f7a-0b1c7b2e51fc-kube-api-access-ltkxr\") pod \"nova-cell1-novncproxy-0\" (UID: \"78d52648-9036-4507-8f7a-0b1c7b2e51fc\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.282524 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.291919 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.291983 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.292072 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw47q\" (UniqueName: \"kubernetes.io/projected/6f69017e-0c0e-403d-9ec9-685b8b565878-kube-api-access-vw47q\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.292290 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-config\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.292325 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.292341 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.293456 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.293844 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-config\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.294082 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.294122 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.294459 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.315934 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw47q\" (UniqueName: \"kubernetes.io/projected/6f69017e-0c0e-403d-9ec9-685b8b565878-kube-api-access-vw47q\") pod \"dnsmasq-dns-845d6d6f59-gdbhm\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") " pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.497817 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.560475 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-c26fd"] Dec 16 12:19:06 crc kubenswrapper[4805]: I1216 12:19:06.920471 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.343115 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.365839 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.374949 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:19:07 crc kubenswrapper[4805]: W1216 12:19:07.384753 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98dd959b_802c_4ea8_8876_a9a56e7fbafe.slice/crio-c422f2212c22176ad6a9050c8b88e458372c81dd5e0fee195fc3f93ef8bb0865 WatchSource:0}: Error finding container c422f2212c22176ad6a9050c8b88e458372c81dd5e0fee195fc3f93ef8bb0865: Status 404 returned error can't find the container with id c422f2212c22176ad6a9050c8b88e458372c81dd5e0fee195fc3f93ef8bb0865 Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.429970 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-gdbhm"] Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.654602 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8n694"] Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.655945 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.667152 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98dd959b-802c-4ea8-8876-a9a56e7fbafe","Type":"ContainerStarted","Data":"c422f2212c22176ad6a9050c8b88e458372c81dd5e0fee195fc3f93ef8bb0865"} Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.689677 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" event={"ID":"6f69017e-0c0e-403d-9ec9-685b8b565878","Type":"ContainerStarted","Data":"4b7b1baa44cd4160f05c3c6bfc9b525d47505ca128d037b466943f4cf5482343"} Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.694452 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.694668 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.700652 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c26fd" event={"ID":"73a7a5f1-017b-4be6-b498-9c9d7416fada","Type":"ContainerStarted","Data":"63ecef3c06292f6c8645c7058de4184a348d677eae434d60d79d38f100662248"} Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.700693 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c26fd" event={"ID":"73a7a5f1-017b-4be6-b498-9c9d7416fada","Type":"ContainerStarted","Data":"e0c6bb0f781f2dd15665b698b4266101648ca80ef52e6e4ea0b8820f506bc22c"} Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.708214 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8n694"] Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.717418 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"062ffbac-0f68-408c-ab5b-b5a986e2f333","Type":"ContainerStarted","Data":"86bed1c7bdaa05284ae57dad6f33c5e2f5410a1fb4c5bc6677919fccf0878628"} Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.731292 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78d52648-9036-4507-8f7a-0b1c7b2e51fc","Type":"ContainerStarted","Data":"e43d47c66b7652fe69bfc1f684744307f9d620c0d13de85becab4a153738663e"} Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.733378 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0360fb3e-6835-48be-b3a9-8d56756d649d","Type":"ContainerStarted","Data":"557da7a8151ca12b0ff921b39845c38e971eae393375d1f3a1d4ca6cf97cf867"} Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.778552 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-c26fd" podStartSLOduration=2.7785341199999998 podStartE2EDuration="2.77853412s" podCreationTimestamp="2025-12-16 12:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:19:07.748966332 +0000 UTC m=+1421.467224137" watchObservedRunningTime="2025-12-16 12:19:07.77853412 +0000 UTC m=+1421.496791945" Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.851555 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-config-data\") pod \"nova-cell1-conductor-db-sync-8n694\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.851617 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25svp\" (UniqueName: \"kubernetes.io/projected/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-kube-api-access-25svp\") pod \"nova-cell1-conductor-db-sync-8n694\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.851673 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-scripts\") pod \"nova-cell1-conductor-db-sync-8n694\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.851723 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8n694\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.953617 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8n694\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.953839 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-config-data\") pod \"nova-cell1-conductor-db-sync-8n694\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.953863 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25svp\" (UniqueName: \"kubernetes.io/projected/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-kube-api-access-25svp\") pod \"nova-cell1-conductor-db-sync-8n694\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.953926 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-scripts\") pod \"nova-cell1-conductor-db-sync-8n694\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.961309 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-config-data\") pod \"nova-cell1-conductor-db-sync-8n694\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.962056 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-scripts\") pod \"nova-cell1-conductor-db-sync-8n694\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.970255 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8n694\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:07 crc kubenswrapper[4805]: I1216 12:19:07.982164 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25svp\" (UniqueName: \"kubernetes.io/projected/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-kube-api-access-25svp\") pod \"nova-cell1-conductor-db-sync-8n694\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:08 crc kubenswrapper[4805]: I1216 12:19:08.071837 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:08 crc kubenswrapper[4805]: I1216 12:19:08.611505 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8n694"] Dec 16 12:19:08 crc kubenswrapper[4805]: I1216 12:19:08.754492 4805 generic.go:334] "Generic (PLEG): container finished" podID="6f69017e-0c0e-403d-9ec9-685b8b565878" containerID="1fa6ac65aea2d3d8b6db6d8d69f2330e983f0c6c685ea35225165a8353004df6" exitCode=0 Dec 16 12:19:08 crc kubenswrapper[4805]: I1216 12:19:08.754554 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" event={"ID":"6f69017e-0c0e-403d-9ec9-685b8b565878","Type":"ContainerDied","Data":"1fa6ac65aea2d3d8b6db6d8d69f2330e983f0c6c685ea35225165a8353004df6"} Dec 16 12:19:08 crc kubenswrapper[4805]: I1216 12:19:08.781758 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8n694" event={"ID":"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f","Type":"ContainerStarted","Data":"f2a26c0a5e4e11091b1d90f40b56d5241e66ed7ee27135de09764571ba0b6dc2"} Dec 16 12:19:09 crc kubenswrapper[4805]: I1216 12:19:09.810213 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" event={"ID":"6f69017e-0c0e-403d-9ec9-685b8b565878","Type":"ContainerStarted","Data":"3309d1ec37e1429a474629b549e46332acc4572d27ef99b3338c21799ac377ce"} Dec 16 12:19:09 crc kubenswrapper[4805]: I1216 12:19:09.810980 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:09 crc kubenswrapper[4805]: I1216 12:19:09.816729 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8n694" event={"ID":"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f","Type":"ContainerStarted","Data":"f39f20c711d9cf1226294beb703afab43635a50ca800b65894720531a4a46750"} Dec 16 12:19:09 crc kubenswrapper[4805]: I1216 12:19:09.837166 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" podStartSLOduration=4.837127856 podStartE2EDuration="4.837127856s" podCreationTimestamp="2025-12-16 12:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:19:09.830195578 +0000 UTC m=+1423.548453393" watchObservedRunningTime="2025-12-16 12:19:09.837127856 +0000 UTC m=+1423.555385671" Dec 16 12:19:09 crc kubenswrapper[4805]: I1216 12:19:09.855383 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-8n694" podStartSLOduration=2.855359779 podStartE2EDuration="2.855359779s" podCreationTimestamp="2025-12-16 12:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:19:09.854632438 +0000 UTC m=+1423.572890253" watchObservedRunningTime="2025-12-16 12:19:09.855359779 +0000 UTC m=+1423.573617604" Dec 16 12:19:10 crc kubenswrapper[4805]: I1216 12:19:10.079562 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:19:10 crc kubenswrapper[4805]: I1216 12:19:10.322593 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 16 12:19:10 crc kubenswrapper[4805]: I1216 12:19:10.549803 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 16 12:19:12 crc kubenswrapper[4805]: I1216 12:19:12.900347 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0360fb3e-6835-48be-b3a9-8d56756d649d","Type":"ContainerStarted","Data":"4f437da5e8ed1509308ecb853cf093699562445ec02c72f8239e6c2dd0447c58"} Dec 16 12:19:12 crc kubenswrapper[4805]: I1216 12:19:12.903366 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98dd959b-802c-4ea8-8876-a9a56e7fbafe","Type":"ContainerStarted","Data":"e36041000a33cefd479e2373100245ed46e36ae149a1c0205d64e300d8792031"} Dec 16 12:19:12 crc kubenswrapper[4805]: I1216 12:19:12.912402 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"062ffbac-0f68-408c-ab5b-b5a986e2f333","Type":"ContainerStarted","Data":"3731a120f46127a16bdcf3ab434c8673fc1bdc4ff90b3f993d91ddf84fb38614"} Dec 16 12:19:12 crc kubenswrapper[4805]: I1216 12:19:12.915464 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78d52648-9036-4507-8f7a-0b1c7b2e51fc","Type":"ContainerStarted","Data":"4210058fc4020b555db97a4e614a35dbcb9525c26749ee760709d8f9c66c7218"} Dec 16 12:19:12 crc kubenswrapper[4805]: I1216 12:19:12.915837 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="78d52648-9036-4507-8f7a-0b1c7b2e51fc" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4210058fc4020b555db97a4e614a35dbcb9525c26749ee760709d8f9c66c7218" gracePeriod=30 Dec 16 12:19:12 crc kubenswrapper[4805]: I1216 12:19:12.938694 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.10435006 podStartE2EDuration="7.938663204s" podCreationTimestamp="2025-12-16 12:19:05 +0000 UTC" firstStartedPulling="2025-12-16 12:19:07.387106788 +0000 UTC m=+1421.105364593" lastFinishedPulling="2025-12-16 12:19:12.221419932 +0000 UTC m=+1425.939677737" observedRunningTime="2025-12-16 12:19:12.924456647 +0000 UTC m=+1426.642714472" watchObservedRunningTime="2025-12-16 12:19:12.938663204 +0000 UTC m=+1426.656921019" Dec 16 12:19:12 crc kubenswrapper[4805]: I1216 12:19:12.954309 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.114602125 podStartE2EDuration="7.954284662s" podCreationTimestamp="2025-12-16 12:19:05 +0000 UTC" firstStartedPulling="2025-12-16 12:19:07.364396297 +0000 UTC m=+1421.082654102" lastFinishedPulling="2025-12-16 12:19:12.204078824 +0000 UTC m=+1425.922336639" observedRunningTime="2025-12-16 12:19:12.940331912 +0000 UTC m=+1426.658589717" watchObservedRunningTime="2025-12-16 12:19:12.954284662 +0000 UTC m=+1426.672542487" Dec 16 12:19:14 crc kubenswrapper[4805]: I1216 12:19:14.038543 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0360fb3e-6835-48be-b3a9-8d56756d649d","Type":"ContainerStarted","Data":"6cae2baba5cd1fa47d78054e1b89eaff2e92164ef7c647f36c6fb3ee2113e315"} Dec 16 12:19:14 crc kubenswrapper[4805]: I1216 12:19:14.046128 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="062ffbac-0f68-408c-ab5b-b5a986e2f333" containerName="nova-metadata-log" containerID="cri-o://3731a120f46127a16bdcf3ab434c8673fc1bdc4ff90b3f993d91ddf84fb38614" gracePeriod=30 Dec 16 12:19:14 crc kubenswrapper[4805]: I1216 12:19:14.046476 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="062ffbac-0f68-408c-ab5b-b5a986e2f333" containerName="nova-metadata-metadata" containerID="cri-o://36f8b9ed82aba8bd9d6fb556c0cf3fb83e5ed923dccfd4e54b719fc9c97e99dc" gracePeriod=30 Dec 16 12:19:14 crc kubenswrapper[4805]: I1216 12:19:14.046567 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"062ffbac-0f68-408c-ab5b-b5a986e2f333","Type":"ContainerStarted","Data":"36f8b9ed82aba8bd9d6fb556c0cf3fb83e5ed923dccfd4e54b719fc9c97e99dc"} Dec 16 12:19:14 crc kubenswrapper[4805]: I1216 12:19:14.227892 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.920461037 podStartE2EDuration="9.227869464s" podCreationTimestamp="2025-12-16 12:19:05 +0000 UTC" firstStartedPulling="2025-12-16 12:19:06.929407036 +0000 UTC m=+1420.647664841" lastFinishedPulling="2025-12-16 12:19:12.236815463 +0000 UTC m=+1425.955073268" observedRunningTime="2025-12-16 12:19:14.07496125 +0000 UTC m=+1427.793219055" watchObservedRunningTime="2025-12-16 12:19:14.227869464 +0000 UTC m=+1427.946127279" Dec 16 12:19:14 crc kubenswrapper[4805]: I1216 12:19:14.252591 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.405896174 podStartE2EDuration="9.252566812s" podCreationTimestamp="2025-12-16 12:19:05 +0000 UTC" firstStartedPulling="2025-12-16 12:19:07.403005964 +0000 UTC m=+1421.121263769" lastFinishedPulling="2025-12-16 12:19:12.249676602 +0000 UTC m=+1425.967934407" observedRunningTime="2025-12-16 12:19:14.235509913 +0000 UTC m=+1427.953767728" watchObservedRunningTime="2025-12-16 12:19:14.252566812 +0000 UTC m=+1427.970824627" Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.069059 4805 generic.go:334] "Generic (PLEG): container finished" podID="062ffbac-0f68-408c-ab5b-b5a986e2f333" containerID="36f8b9ed82aba8bd9d6fb556c0cf3fb83e5ed923dccfd4e54b719fc9c97e99dc" exitCode=0 Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.069090 4805 generic.go:334] "Generic (PLEG): container finished" podID="062ffbac-0f68-408c-ab5b-b5a986e2f333" 
containerID="3731a120f46127a16bdcf3ab434c8673fc1bdc4ff90b3f993d91ddf84fb38614" exitCode=143 Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.070133 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"062ffbac-0f68-408c-ab5b-b5a986e2f333","Type":"ContainerDied","Data":"36f8b9ed82aba8bd9d6fb556c0cf3fb83e5ed923dccfd4e54b719fc9c97e99dc"} Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.070179 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"062ffbac-0f68-408c-ab5b-b5a986e2f333","Type":"ContainerDied","Data":"3731a120f46127a16bdcf3ab434c8673fc1bdc4ff90b3f993d91ddf84fb38614"} Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.329022 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.437779 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/062ffbac-0f68-408c-ab5b-b5a986e2f333-config-data\") pod \"062ffbac-0f68-408c-ab5b-b5a986e2f333\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.437881 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/062ffbac-0f68-408c-ab5b-b5a986e2f333-logs\") pod \"062ffbac-0f68-408c-ab5b-b5a986e2f333\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.437965 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/062ffbac-0f68-408c-ab5b-b5a986e2f333-combined-ca-bundle\") pod \"062ffbac-0f68-408c-ab5b-b5a986e2f333\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.438044 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6jpz\" (UniqueName: \"kubernetes.io/projected/062ffbac-0f68-408c-ab5b-b5a986e2f333-kube-api-access-s6jpz\") pod \"062ffbac-0f68-408c-ab5b-b5a986e2f333\" (UID: \"062ffbac-0f68-408c-ab5b-b5a986e2f333\") " Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.438108 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/062ffbac-0f68-408c-ab5b-b5a986e2f333-logs" (OuterVolumeSpecName: "logs") pod "062ffbac-0f68-408c-ab5b-b5a986e2f333" (UID: "062ffbac-0f68-408c-ab5b-b5a986e2f333"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.438452 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/062ffbac-0f68-408c-ab5b-b5a986e2f333-logs\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.444333 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062ffbac-0f68-408c-ab5b-b5a986e2f333-kube-api-access-s6jpz" (OuterVolumeSpecName: "kube-api-access-s6jpz") pod "062ffbac-0f68-408c-ab5b-b5a986e2f333" (UID: "062ffbac-0f68-408c-ab5b-b5a986e2f333"). InnerVolumeSpecName "kube-api-access-s6jpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.644661 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6jpz\" (UniqueName: \"kubernetes.io/projected/062ffbac-0f68-408c-ab5b-b5a986e2f333-kube-api-access-s6jpz\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.685400 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/062ffbac-0f68-408c-ab5b-b5a986e2f333-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "062ffbac-0f68-408c-ab5b-b5a986e2f333" (UID: "062ffbac-0f68-408c-ab5b-b5a986e2f333"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.711129 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/062ffbac-0f68-408c-ab5b-b5a986e2f333-config-data" (OuterVolumeSpecName: "config-data") pod "062ffbac-0f68-408c-ab5b-b5a986e2f333" (UID: "062ffbac-0f68-408c-ab5b-b5a986e2f333"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.747975 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/062ffbac-0f68-408c-ab5b-b5a986e2f333-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.748003 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/062ffbac-0f68-408c-ab5b-b5a986e2f333-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.841107 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 16 12:19:15 crc kubenswrapper[4805]: I1216 12:19:15.841487 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.088720 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.094357 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"062ffbac-0f68-408c-ab5b-b5a986e2f333","Type":"ContainerDied","Data":"86bed1c7bdaa05284ae57dad6f33c5e2f5410a1fb4c5bc6677919fccf0878628"} Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.094481 4805 scope.go:117] "RemoveContainer" containerID="36f8b9ed82aba8bd9d6fb556c0cf3fb83e5ed923dccfd4e54b719fc9c97e99dc" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.137330 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.159247 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.160417 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.168364 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.197322 4805 scope.go:117] "RemoveContainer" containerID="3731a120f46127a16bdcf3ab434c8673fc1bdc4ff90b3f993d91ddf84fb38614" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.198586 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:19:16 crc kubenswrapper[4805]: E1216 12:19:16.199122 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062ffbac-0f68-408c-ab5b-b5a986e2f333" containerName="nova-metadata-log" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.199148 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="062ffbac-0f68-408c-ab5b-b5a986e2f333" containerName="nova-metadata-log" Dec 16 12:19:16 crc kubenswrapper[4805]: E1216 12:19:16.199174 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062ffbac-0f68-408c-ab5b-b5a986e2f333" containerName="nova-metadata-metadata" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.199180 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="062ffbac-0f68-408c-ab5b-b5a986e2f333" containerName="nova-metadata-metadata" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.199412 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="062ffbac-0f68-408c-ab5b-b5a986e2f333" containerName="nova-metadata-log" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.199432 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="062ffbac-0f68-408c-ab5b-b5a986e2f333" containerName="nova-metadata-metadata" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.200751 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.204984 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.205292 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.218268 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.232482 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.283552 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.363377 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.363510 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8fjf\" (UniqueName: \"kubernetes.io/projected/96dc2db0-57bf-467a-a286-eef9f451ad89-kube-api-access-q8fjf\") pod \"nova-metadata-0\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.363612 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96dc2db0-57bf-467a-a286-eef9f451ad89-logs\") pod \"nova-metadata-0\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.363648 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.363782 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-config-data\") pod \"nova-metadata-0\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.465007 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-config-data\") pod \"nova-metadata-0\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.465155 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " pod="openstack/nova-metadata-0" Dec 16 
12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.465201 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8fjf\" (UniqueName: \"kubernetes.io/projected/96dc2db0-57bf-467a-a286-eef9f451ad89-kube-api-access-q8fjf\") pod \"nova-metadata-0\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.465239 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96dc2db0-57bf-467a-a286-eef9f451ad89-logs\") pod \"nova-metadata-0\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.465260 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.465764 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96dc2db0-57bf-467a-a286-eef9f451ad89-logs\") pod \"nova-metadata-0\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.470667 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-config-data\") pod \"nova-metadata-0\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.472077 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.473648 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.499797 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8fjf\" (UniqueName: \"kubernetes.io/projected/96dc2db0-57bf-467a-a286-eef9f451ad89-kube-api-access-q8fjf\") pod \"nova-metadata-0\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.502552 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.569661 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.761027 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="062ffbac-0f68-408c-ab5b-b5a986e2f333" path="/var/lib/kubelet/pods/062ffbac-0f68-408c-ab5b-b5a986e2f333/volumes" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.761773 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.761800 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q284g"] Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.761970 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-q284g" podUID="cf992c26-81d6-40e1-8b60-58c5fed5db64" containerName="dnsmasq-dns" containerID="cri-o://b7b93caecb8940d7db3e715aeb53d588e8de71266f7fa26698e9c38e87ea49c0" gracePeriod=10 Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.762114 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="3c9ccbdb-6316-4956-bc60-507faeeba295" containerName="kube-state-metrics" containerID="cri-o://5a7c033e3c68fa08b3de14f0a991b917a86c474d7294998ebefb87d6e5ec34c5" gracePeriod=30 Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.933535 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0360fb3e-6835-48be-b3a9-8d56756d649d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 12:19:16 crc kubenswrapper[4805]: I1216 12:19:16.933931 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0360fb3e-6835-48be-b3a9-8d56756d649d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 12:19:17 crc kubenswrapper[4805]: I1216 12:19:17.121858 4805 generic.go:334] "Generic (PLEG): container finished" podID="3c9ccbdb-6316-4956-bc60-507faeeba295" containerID="5a7c033e3c68fa08b3de14f0a991b917a86c474d7294998ebefb87d6e5ec34c5" exitCode=2 Dec 16 12:19:17 crc kubenswrapper[4805]: I1216 12:19:17.122237 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c9ccbdb-6316-4956-bc60-507faeeba295","Type":"ContainerDied","Data":"5a7c033e3c68fa08b3de14f0a991b917a86c474d7294998ebefb87d6e5ec34c5"} Dec 16 12:19:17 crc kubenswrapper[4805]: I1216 12:19:17.245510 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 16 12:19:17 crc kubenswrapper[4805]: I1216 12:19:17.404604 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:19:17 crc kubenswrapper[4805]: E1216 12:19:17.486853 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf992c26_81d6_40e1_8b60_58c5fed5db64.slice/crio-b7b93caecb8940d7db3e715aeb53d588e8de71266f7fa26698e9c38e87ea49c0.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf992c26_81d6_40e1_8b60_58c5fed5db64.slice/crio-conmon-b7b93caecb8940d7db3e715aeb53d588e8de71266f7fa26698e9c38e87ea49c0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c9ccbdb_6316_4956_bc60_507faeeba295.slice/crio-conmon-5a7c033e3c68fa08b3de14f0a991b917a86c474d7294998ebefb87d6e5ec34c5.scope\": RecentStats: unable to find data in memory cache]" Dec 16 12:19:18 crc kubenswrapper[4805]: I1216 12:19:18.195323 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96dc2db0-57bf-467a-a286-eef9f451ad89","Type":"ContainerStarted","Data":"3e6b9dfade0d2321d1a60c78cfc4c5935d48ff7e17f3dab01d271a6c280ba0aa"} Dec 16 12:19:18 crc kubenswrapper[4805]: I1216 12:19:18.200833 4805 generic.go:334] "Generic (PLEG): container finished" podID="cf992c26-81d6-40e1-8b60-58c5fed5db64" containerID="b7b93caecb8940d7db3e715aeb53d588e8de71266f7fa26698e9c38e87ea49c0" exitCode=0 Dec 16 12:19:18 crc kubenswrapper[4805]: I1216 12:19:18.201067 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-q284g" event={"ID":"cf992c26-81d6-40e1-8b60-58c5fed5db64","Type":"ContainerDied","Data":"b7b93caecb8940d7db3e715aeb53d588e8de71266f7fa26698e9c38e87ea49c0"} Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.221684 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96dc2db0-57bf-467a-a286-eef9f451ad89","Type":"ContainerStarted","Data":"c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4"} Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.406467 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.457208 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.554975 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8np6\" (UniqueName: \"kubernetes.io/projected/3c9ccbdb-6316-4956-bc60-507faeeba295-kube-api-access-j8np6\") pod \"3c9ccbdb-6316-4956-bc60-507faeeba295\" (UID: \"3c9ccbdb-6316-4956-bc60-507faeeba295\") " Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.564358 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c9ccbdb-6316-4956-bc60-507faeeba295-kube-api-access-j8np6" (OuterVolumeSpecName: "kube-api-access-j8np6") pod "3c9ccbdb-6316-4956-bc60-507faeeba295" (UID: "3c9ccbdb-6316-4956-bc60-507faeeba295"). InnerVolumeSpecName "kube-api-access-j8np6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.657001 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-ovsdbserver-nb\") pod \"cf992c26-81d6-40e1-8b60-58c5fed5db64\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.657173 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-dns-svc\") pod \"cf992c26-81d6-40e1-8b60-58c5fed5db64\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.657198 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-ovsdbserver-sb\") pod \"cf992c26-81d6-40e1-8b60-58c5fed5db64\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.657252 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8ckb\" (UniqueName: \"kubernetes.io/projected/cf992c26-81d6-40e1-8b60-58c5fed5db64-kube-api-access-k8ckb\") pod \"cf992c26-81d6-40e1-8b60-58c5fed5db64\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.657288 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-config\") pod \"cf992c26-81d6-40e1-8b60-58c5fed5db64\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.657320 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-dns-swift-storage-0\") pod \"cf992c26-81d6-40e1-8b60-58c5fed5db64\" (UID: \"cf992c26-81d6-40e1-8b60-58c5fed5db64\") " Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.669606 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8np6\" (UniqueName: \"kubernetes.io/projected/3c9ccbdb-6316-4956-bc60-507faeeba295-kube-api-access-j8np6\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.740250 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf992c26-81d6-40e1-8b60-58c5fed5db64-kube-api-access-k8ckb" (OuterVolumeSpecName: "kube-api-access-k8ckb") pod "cf992c26-81d6-40e1-8b60-58c5fed5db64" (UID: "cf992c26-81d6-40e1-8b60-58c5fed5db64"). InnerVolumeSpecName "kube-api-access-k8ckb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.767658 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf992c26-81d6-40e1-8b60-58c5fed5db64" (UID: "cf992c26-81d6-40e1-8b60-58c5fed5db64"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.772099 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.772348 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8ckb\" (UniqueName: \"kubernetes.io/projected/cf992c26-81d6-40e1-8b60-58c5fed5db64-kube-api-access-k8ckb\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.783718 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf992c26-81d6-40e1-8b60-58c5fed5db64" (UID: "cf992c26-81d6-40e1-8b60-58c5fed5db64"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.792323 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-config" (OuterVolumeSpecName: "config") pod "cf992c26-81d6-40e1-8b60-58c5fed5db64" (UID: "cf992c26-81d6-40e1-8b60-58c5fed5db64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.802944 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf992c26-81d6-40e1-8b60-58c5fed5db64" (UID: "cf992c26-81d6-40e1-8b60-58c5fed5db64"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.812095 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cf992c26-81d6-40e1-8b60-58c5fed5db64" (UID: "cf992c26-81d6-40e1-8b60-58c5fed5db64"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.874545 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.874883 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.874994 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:19 crc kubenswrapper[4805]: I1216 12:19:19.875081 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf992c26-81d6-40e1-8b60-58c5fed5db64-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.233952 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-q284g" event={"ID":"cf992c26-81d6-40e1-8b60-58c5fed5db64","Type":"ContainerDied","Data":"3a61505c8346117b6f91d120d535b9b098fe6ea52580fc353e15ed5f71225e4a"} Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.234063 4805 scope.go:117] "RemoveContainer" containerID="b7b93caecb8940d7db3e715aeb53d588e8de71266f7fa26698e9c38e87ea49c0" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.234327 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-q284g" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.236005 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c9ccbdb-6316-4956-bc60-507faeeba295","Type":"ContainerDied","Data":"b3c8e2a8add0149944415a4f4f02c5881b842bd3366e15259ad9fab9b6ac20b4"} Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.236073 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.239831 4805 generic.go:334] "Generic (PLEG): container finished" podID="73a7a5f1-017b-4be6-b498-9c9d7416fada" containerID="63ecef3c06292f6c8645c7058de4184a348d677eae434d60d79d38f100662248" exitCode=0 Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.239888 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c26fd" event={"ID":"73a7a5f1-017b-4be6-b498-9c9d7416fada","Type":"ContainerDied","Data":"63ecef3c06292f6c8645c7058de4184a348d677eae434d60d79d38f100662248"} Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.241665 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96dc2db0-57bf-467a-a286-eef9f451ad89","Type":"ContainerStarted","Data":"86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205"} Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.271081 4805 scope.go:117] "RemoveContainer" containerID="84dea3ad9d2e1f5c90a4d6f2c4dd6e5c859b9c223427dde8dadac313faea73f7" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.305210 4805 scope.go:117] "RemoveContainer" containerID="5a7c033e3c68fa08b3de14f0a991b917a86c474d7294998ebefb87d6e5ec34c5" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.311706 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.311678298 podStartE2EDuration="4.311678298s" podCreationTimestamp="2025-12-16 12:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:19:20.304649717 +0000 UTC m=+1434.022907522" watchObservedRunningTime="2025-12-16 12:19:20.311678298 +0000 UTC m=+1434.029936123" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.345354 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.365390 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.383493 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q284g"] Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.391685 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 16 12:19:20 crc kubenswrapper[4805]: E1216 12:19:20.392469 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf992c26-81d6-40e1-8b60-58c5fed5db64" containerName="dnsmasq-dns" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.392481 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf992c26-81d6-40e1-8b60-58c5fed5db64" containerName="dnsmasq-dns" Dec 16 12:19:20 crc kubenswrapper[4805]: E1216 12:19:20.392518 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf992c26-81d6-40e1-8b60-58c5fed5db64" containerName="init" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.392525 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf992c26-81d6-40e1-8b60-58c5fed5db64" containerName="init" Dec 16 12:19:20 crc kubenswrapper[4805]: E1216 12:19:20.392543 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c9ccbdb-6316-4956-bc60-507faeeba295" containerName="kube-state-metrics" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.392549 4805 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="3c9ccbdb-6316-4956-bc60-507faeeba295" containerName="kube-state-metrics" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.392777 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf992c26-81d6-40e1-8b60-58c5fed5db64" containerName="dnsmasq-dns" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.392789 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c9ccbdb-6316-4956-bc60-507faeeba295" containerName="kube-state-metrics" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.393531 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.397588 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.402476 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.402503 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-q284g"] Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.427766 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.489952 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e3d98762-b270-4f16-8dce-26f0662152ad-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e3d98762-b270-4f16-8dce-26f0662152ad\") " pod="openstack/kube-state-metrics-0" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.490045 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3d98762-b270-4f16-8dce-26f0662152ad-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e3d98762-b270-4f16-8dce-26f0662152ad\") " pod="openstack/kube-state-metrics-0" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.490114 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d98762-b270-4f16-8dce-26f0662152ad-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e3d98762-b270-4f16-8dce-26f0662152ad\") " pod="openstack/kube-state-metrics-0" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.490158 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxv75\" (UniqueName: \"kubernetes.io/projected/e3d98762-b270-4f16-8dce-26f0662152ad-kube-api-access-nxv75\") pod \"kube-state-metrics-0\" (UID: \"e3d98762-b270-4f16-8dce-26f0662152ad\") " pod="openstack/kube-state-metrics-0" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.534375 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c9ccbdb-6316-4956-bc60-507faeeba295" path="/var/lib/kubelet/pods/3c9ccbdb-6316-4956-bc60-507faeeba295/volumes" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.536045 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf992c26-81d6-40e1-8b60-58c5fed5db64" path="/var/lib/kubelet/pods/cf992c26-81d6-40e1-8b60-58c5fed5db64/volumes" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.592216 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e3d98762-b270-4f16-8dce-26f0662152ad-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e3d98762-b270-4f16-8dce-26f0662152ad\") " pod="openstack/kube-state-metrics-0" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.592306 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3d98762-b270-4f16-8dce-26f0662152ad-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e3d98762-b270-4f16-8dce-26f0662152ad\") " pod="openstack/kube-state-metrics-0" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.592368 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d98762-b270-4f16-8dce-26f0662152ad-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e3d98762-b270-4f16-8dce-26f0662152ad\") " pod="openstack/kube-state-metrics-0" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.592424 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxv75\" (UniqueName: \"kubernetes.io/projected/e3d98762-b270-4f16-8dce-26f0662152ad-kube-api-access-nxv75\") pod \"kube-state-metrics-0\" (UID: \"e3d98762-b270-4f16-8dce-26f0662152ad\") " pod="openstack/kube-state-metrics-0" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.599154 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d98762-b270-4f16-8dce-26f0662152ad-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e3d98762-b270-4f16-8dce-26f0662152ad\") " pod="openstack/kube-state-metrics-0" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.611817 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e3d98762-b270-4f16-8dce-26f0662152ad-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e3d98762-b270-4f16-8dce-26f0662152ad\") " pod="openstack/kube-state-metrics-0" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.614559 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3d98762-b270-4f16-8dce-26f0662152ad-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e3d98762-b270-4f16-8dce-26f0662152ad\") " pod="openstack/kube-state-metrics-0" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.615848 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxv75\" (UniqueName: \"kubernetes.io/projected/e3d98762-b270-4f16-8dce-26f0662152ad-kube-api-access-nxv75\") pod \"kube-state-metrics-0\" (UID: \"e3d98762-b270-4f16-8dce-26f0662152ad\") " pod="openstack/kube-state-metrics-0" Dec 16 12:19:20 crc kubenswrapper[4805]: I1216 12:19:20.730361 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 16 12:19:21 crc kubenswrapper[4805]: I1216 12:19:21.207064 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:19:21 crc kubenswrapper[4805]: I1216 12:19:21.207880 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="ceilometer-central-agent" containerID="cri-o://1f41b6f481394418f818c132b49aac7db96286e0c4bbbda241fbb9fe7ee35e05" gracePeriod=30 Dec 16 12:19:21 crc kubenswrapper[4805]: I1216 12:19:21.208468 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="proxy-httpd" containerID="cri-o://d75581027b22c7767d415dc5c770cea8713dc773665ad24a05e7fffa9dd1e51e" gracePeriod=30 Dec 16 12:19:21 crc kubenswrapper[4805]: I1216 12:19:21.208547 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="sg-core" containerID="cri-o://e95855c94ad74684a60a96f90e0a11695e11b34db554ffcc7d8f2728395028e3" gracePeriod=30 Dec 16 12:19:21 crc kubenswrapper[4805]: I1216 12:19:21.208603 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="ceilometer-notification-agent" containerID="cri-o://66ccc4874bd8432eeb48c8dcb2256921e37268a237c578a8ca9f9138c85e75e3" gracePeriod=30 Dec 16 12:19:21 crc kubenswrapper[4805]: I1216 12:19:21.256300 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 16 12:19:21 crc kubenswrapper[4805]: I1216 12:19:21.571907 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 16 12:19:21 crc kubenswrapper[4805]: I1216 12:19:21.572071 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 16 12:19:21 crc kubenswrapper[4805]: I1216 12:19:21.767010 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:21 crc kubenswrapper[4805]: I1216 12:19:21.930907 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-scripts\") pod \"73a7a5f1-017b-4be6-b498-9c9d7416fada\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " Dec 16 12:19:21 crc kubenswrapper[4805]: I1216 12:19:21.931050 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmsn7\" (UniqueName: \"kubernetes.io/projected/73a7a5f1-017b-4be6-b498-9c9d7416fada-kube-api-access-wmsn7\") pod \"73a7a5f1-017b-4be6-b498-9c9d7416fada\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " Dec 16 12:19:21 crc kubenswrapper[4805]: I1216 12:19:21.931208 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-config-data\") pod \"73a7a5f1-017b-4be6-b498-9c9d7416fada\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " Dec 16 12:19:21 crc kubenswrapper[4805]: I1216 12:19:21.931245 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-combined-ca-bundle\") pod \"73a7a5f1-017b-4be6-b498-9c9d7416fada\" (UID: \"73a7a5f1-017b-4be6-b498-9c9d7416fada\") " Dec 16 12:19:21 crc kubenswrapper[4805]: I1216 12:19:21.936629 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-scripts" (OuterVolumeSpecName: "scripts") pod "73a7a5f1-017b-4be6-b498-9c9d7416fada" (UID: "73a7a5f1-017b-4be6-b498-9c9d7416fada"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:21 crc kubenswrapper[4805]: I1216 12:19:21.939366 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73a7a5f1-017b-4be6-b498-9c9d7416fada-kube-api-access-wmsn7" (OuterVolumeSpecName: "kube-api-access-wmsn7") pod "73a7a5f1-017b-4be6-b498-9c9d7416fada" (UID: "73a7a5f1-017b-4be6-b498-9c9d7416fada"). InnerVolumeSpecName "kube-api-access-wmsn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.013286 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73a7a5f1-017b-4be6-b498-9c9d7416fada" (UID: "73a7a5f1-017b-4be6-b498-9c9d7416fada"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.020852 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-config-data" (OuterVolumeSpecName: "config-data") pod "73a7a5f1-017b-4be6-b498-9c9d7416fada" (UID: "73a7a5f1-017b-4be6-b498-9c9d7416fada"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.033919 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.033958 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.033971 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73a7a5f1-017b-4be6-b498-9c9d7416fada-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.033983 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmsn7\" (UniqueName: \"kubernetes.io/projected/73a7a5f1-017b-4be6-b498-9c9d7416fada-kube-api-access-wmsn7\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.307387 4805 generic.go:334] "Generic (PLEG): container finished" podID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerID="d75581027b22c7767d415dc5c770cea8713dc773665ad24a05e7fffa9dd1e51e" exitCode=0 Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.307416 4805 generic.go:334] "Generic (PLEG): container finished" podID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerID="e95855c94ad74684a60a96f90e0a11695e11b34db554ffcc7d8f2728395028e3" exitCode=2 Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.307424 4805 generic.go:334] "Generic (PLEG): container finished" podID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerID="1f41b6f481394418f818c132b49aac7db96286e0c4bbbda241fbb9fe7ee35e05" exitCode=0 Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.307434 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0301416f-6de4-41bd-86e5-c6861d64ee07","Type":"ContainerDied","Data":"d75581027b22c7767d415dc5c770cea8713dc773665ad24a05e7fffa9dd1e51e"} Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.307520 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0301416f-6de4-41bd-86e5-c6861d64ee07","Type":"ContainerDied","Data":"e95855c94ad74684a60a96f90e0a11695e11b34db554ffcc7d8f2728395028e3"} Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.307535 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0301416f-6de4-41bd-86e5-c6861d64ee07","Type":"ContainerDied","Data":"1f41b6f481394418f818c132b49aac7db96286e0c4bbbda241fbb9fe7ee35e05"} Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.309321 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c26fd" event={"ID":"73a7a5f1-017b-4be6-b498-9c9d7416fada","Type":"ContainerDied","Data":"e0c6bb0f781f2dd15665b698b4266101648ca80ef52e6e4ea0b8820f506bc22c"} Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.309357 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c26fd" Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.309365 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0c6bb0f781f2dd15665b698b4266101648ca80ef52e6e4ea0b8820f506bc22c" Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.316754 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e3d98762-b270-4f16-8dce-26f0662152ad","Type":"ContainerStarted","Data":"de9b9eb3be6d91b441ed1cd033aaf3f1af2ed6e6c0d9dd4f696f1ad020a6c600"} Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.316789 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e3d98762-b270-4f16-8dce-26f0662152ad","Type":"ContainerStarted","Data":"7f9616eee98dff98bebfd88dd517f07037953025e42f7649a5d8cf9d74a1a40a"} Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.316823 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.350546 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.861356526 podStartE2EDuration="2.35051335s" podCreationTimestamp="2025-12-16 12:19:20 +0000 UTC" firstStartedPulling="2025-12-16 12:19:21.268373046 +0000 UTC m=+1434.986630851" lastFinishedPulling="2025-12-16 12:19:21.75752987 +0000 UTC m=+1435.475787675" observedRunningTime="2025-12-16 12:19:22.339455003 +0000 UTC m=+1436.057712808" watchObservedRunningTime="2025-12-16 12:19:22.35051335 +0000 UTC m=+1436.068771165" Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.444529 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.444816 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0360fb3e-6835-48be-b3a9-8d56756d649d" containerName="nova-api-log" containerID="cri-o://4f437da5e8ed1509308ecb853cf093699562445ec02c72f8239e6c2dd0447c58" gracePeriod=30 Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.444962 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0360fb3e-6835-48be-b3a9-8d56756d649d" containerName="nova-api-api" containerID="cri-o://6cae2baba5cd1fa47d78054e1b89eaff2e92164ef7c647f36c6fb3ee2113e315" gracePeriod=30 Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.457324 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.457820 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="98dd959b-802c-4ea8-8876-a9a56e7fbafe" containerName="nova-scheduler-scheduler" containerID="cri-o://e36041000a33cefd479e2373100245ed46e36ae149a1c0205d64e300d8792031" gracePeriod=30 Dec 16 12:19:22 crc kubenswrapper[4805]: I1216 12:19:22.473445 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:19:23 crc kubenswrapper[4805]: I1216 12:19:23.332330 4805 generic.go:334] "Generic (PLEG): container finished" podID="0360fb3e-6835-48be-b3a9-8d56756d649d" containerID="4f437da5e8ed1509308ecb853cf093699562445ec02c72f8239e6c2dd0447c58" exitCode=143 Dec 16 12:19:23 crc kubenswrapper[4805]: I1216 12:19:23.333470 4805 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/nova-metadata-0" podUID="96dc2db0-57bf-467a-a286-eef9f451ad89" containerName="nova-metadata-log" containerID="cri-o://c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4" gracePeriod=30 Dec 16 12:19:23 crc kubenswrapper[4805]: I1216 12:19:23.332459 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0360fb3e-6835-48be-b3a9-8d56756d649d","Type":"ContainerDied","Data":"4f437da5e8ed1509308ecb853cf093699562445ec02c72f8239e6c2dd0447c58"} Dec 16 12:19:23 crc kubenswrapper[4805]: I1216 12:19:23.334566 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="96dc2db0-57bf-467a-a286-eef9f451ad89" containerName="nova-metadata-metadata" containerID="cri-o://86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205" gracePeriod=30 Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.003522 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.100086 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-config-data\") pod \"96dc2db0-57bf-467a-a286-eef9f451ad89\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.100278 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8fjf\" (UniqueName: \"kubernetes.io/projected/96dc2db0-57bf-467a-a286-eef9f451ad89-kube-api-access-q8fjf\") pod \"96dc2db0-57bf-467a-a286-eef9f451ad89\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.100331 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-nova-metadata-tls-certs\") pod \"96dc2db0-57bf-467a-a286-eef9f451ad89\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.100384 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96dc2db0-57bf-467a-a286-eef9f451ad89-logs\") pod \"96dc2db0-57bf-467a-a286-eef9f451ad89\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.100516 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-combined-ca-bundle\") pod \"96dc2db0-57bf-467a-a286-eef9f451ad89\" (UID: \"96dc2db0-57bf-467a-a286-eef9f451ad89\") " Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.101523 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96dc2db0-57bf-467a-a286-eef9f451ad89-logs" (OuterVolumeSpecName: "logs") pod "96dc2db0-57bf-467a-a286-eef9f451ad89" (UID: "96dc2db0-57bf-467a-a286-eef9f451ad89"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.102303 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96dc2db0-57bf-467a-a286-eef9f451ad89-logs\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.108435 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96dc2db0-57bf-467a-a286-eef9f451ad89-kube-api-access-q8fjf" (OuterVolumeSpecName: "kube-api-access-q8fjf") pod "96dc2db0-57bf-467a-a286-eef9f451ad89" (UID: "96dc2db0-57bf-467a-a286-eef9f451ad89"). InnerVolumeSpecName "kube-api-access-q8fjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.129257 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96dc2db0-57bf-467a-a286-eef9f451ad89" (UID: "96dc2db0-57bf-467a-a286-eef9f451ad89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.137424 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-config-data" (OuterVolumeSpecName: "config-data") pod "96dc2db0-57bf-467a-a286-eef9f451ad89" (UID: "96dc2db0-57bf-467a-a286-eef9f451ad89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.145084 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.184845 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "96dc2db0-57bf-467a-a286-eef9f451ad89" (UID: "96dc2db0-57bf-467a-a286-eef9f451ad89"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.203469 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2wzk\" (UniqueName: \"kubernetes.io/projected/98dd959b-802c-4ea8-8876-a9a56e7fbafe-kube-api-access-b2wzk\") pod \"98dd959b-802c-4ea8-8876-a9a56e7fbafe\" (UID: \"98dd959b-802c-4ea8-8876-a9a56e7fbafe\") " Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.203629 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98dd959b-802c-4ea8-8876-a9a56e7fbafe-combined-ca-bundle\") pod \"98dd959b-802c-4ea8-8876-a9a56e7fbafe\" (UID: \"98dd959b-802c-4ea8-8876-a9a56e7fbafe\") " Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.203742 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98dd959b-802c-4ea8-8876-a9a56e7fbafe-config-data\") pod \"98dd959b-802c-4ea8-8876-a9a56e7fbafe\" (UID: \"98dd959b-802c-4ea8-8876-a9a56e7fbafe\") " Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.205721 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8fjf\" (UniqueName: \"kubernetes.io/projected/96dc2db0-57bf-467a-a286-eef9f451ad89-kube-api-access-q8fjf\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.205756 4805 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.205774 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.205787 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96dc2db0-57bf-467a-a286-eef9f451ad89-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.210325 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98dd959b-802c-4ea8-8876-a9a56e7fbafe-kube-api-access-b2wzk" (OuterVolumeSpecName: "kube-api-access-b2wzk") pod "98dd959b-802c-4ea8-8876-a9a56e7fbafe" (UID: "98dd959b-802c-4ea8-8876-a9a56e7fbafe"). InnerVolumeSpecName "kube-api-access-b2wzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.233798 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98dd959b-802c-4ea8-8876-a9a56e7fbafe-config-data" (OuterVolumeSpecName: "config-data") pod "98dd959b-802c-4ea8-8876-a9a56e7fbafe" (UID: "98dd959b-802c-4ea8-8876-a9a56e7fbafe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.237259 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98dd959b-802c-4ea8-8876-a9a56e7fbafe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98dd959b-802c-4ea8-8876-a9a56e7fbafe" (UID: "98dd959b-802c-4ea8-8876-a9a56e7fbafe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.308562 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98dd959b-802c-4ea8-8876-a9a56e7fbafe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.308604 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98dd959b-802c-4ea8-8876-a9a56e7fbafe-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.308618 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2wzk\" (UniqueName: \"kubernetes.io/projected/98dd959b-802c-4ea8-8876-a9a56e7fbafe-kube-api-access-b2wzk\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.343912 4805 generic.go:334] "Generic (PLEG): container finished" podID="98dd959b-802c-4ea8-8876-a9a56e7fbafe" containerID="e36041000a33cefd479e2373100245ed46e36ae149a1c0205d64e300d8792031" exitCode=0 Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.343976 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.344006 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98dd959b-802c-4ea8-8876-a9a56e7fbafe","Type":"ContainerDied","Data":"e36041000a33cefd479e2373100245ed46e36ae149a1c0205d64e300d8792031"} Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.344448 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98dd959b-802c-4ea8-8876-a9a56e7fbafe","Type":"ContainerDied","Data":"c422f2212c22176ad6a9050c8b88e458372c81dd5e0fee195fc3f93ef8bb0865"} Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.344524 4805 scope.go:117] "RemoveContainer" containerID="e36041000a33cefd479e2373100245ed46e36ae149a1c0205d64e300d8792031" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.346279 4805 generic.go:334] "Generic (PLEG): container finished" podID="96dc2db0-57bf-467a-a286-eef9f451ad89" containerID="86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205" exitCode=0 Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.346314 4805 generic.go:334] "Generic (PLEG): container finished" podID="96dc2db0-57bf-467a-a286-eef9f451ad89" containerID="c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4" exitCode=143 Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.346337 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96dc2db0-57bf-467a-a286-eef9f451ad89","Type":"ContainerDied","Data":"86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205"} Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.346355 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.346358 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96dc2db0-57bf-467a-a286-eef9f451ad89","Type":"ContainerDied","Data":"c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4"} Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.346542 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96dc2db0-57bf-467a-a286-eef9f451ad89","Type":"ContainerDied","Data":"3e6b9dfade0d2321d1a60c78cfc4c5935d48ff7e17f3dab01d271a6c280ba0aa"} Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.369265 4805 scope.go:117] "RemoveContainer" containerID="e36041000a33cefd479e2373100245ed46e36ae149a1c0205d64e300d8792031" Dec 16 12:19:24 crc kubenswrapper[4805]: E1216 12:19:24.369800 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e36041000a33cefd479e2373100245ed46e36ae149a1c0205d64e300d8792031\": container with ID starting with e36041000a33cefd479e2373100245ed46e36ae149a1c0205d64e300d8792031 not found: ID does not exist" containerID="e36041000a33cefd479e2373100245ed46e36ae149a1c0205d64e300d8792031" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.369863 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36041000a33cefd479e2373100245ed46e36ae149a1c0205d64e300d8792031"} err="failed to get container status \"e36041000a33cefd479e2373100245ed46e36ae149a1c0205d64e300d8792031\": rpc error: code = NotFound desc = could not find container \"e36041000a33cefd479e2373100245ed46e36ae149a1c0205d64e300d8792031\": container with ID starting with e36041000a33cefd479e2373100245ed46e36ae149a1c0205d64e300d8792031 not found: ID does not exist" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.369884 4805 scope.go:117] "RemoveContainer" containerID="86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.578519 4805 scope.go:117] "RemoveContainer" containerID="c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.631073 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.643591 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.643860 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 16 12:19:24 crc kubenswrapper[4805]: E1216 12:19:24.651601 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dd959b-802c-4ea8-8876-a9a56e7fbafe" containerName="nova-scheduler-scheduler" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.652263 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dd959b-802c-4ea8-8876-a9a56e7fbafe" containerName="nova-scheduler-scheduler" Dec 16 12:19:24 crc kubenswrapper[4805]: E1216 12:19:24.657393 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96dc2db0-57bf-467a-a286-eef9f451ad89" containerName="nova-metadata-log" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.657427 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="96dc2db0-57bf-467a-a286-eef9f451ad89" containerName="nova-metadata-log" Dec 16 12:19:24 crc 
kubenswrapper[4805]: E1216 12:19:24.657455 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73a7a5f1-017b-4be6-b498-9c9d7416fada" containerName="nova-manage" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.657461 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a7a5f1-017b-4be6-b498-9c9d7416fada" containerName="nova-manage" Dec 16 12:19:24 crc kubenswrapper[4805]: E1216 12:19:24.657493 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96dc2db0-57bf-467a-a286-eef9f451ad89" containerName="nova-metadata-metadata" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.657499 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="96dc2db0-57bf-467a-a286-eef9f451ad89" containerName="nova-metadata-metadata" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.657880 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="96dc2db0-57bf-467a-a286-eef9f451ad89" containerName="nova-metadata-log" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.657898 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="96dc2db0-57bf-467a-a286-eef9f451ad89" containerName="nova-metadata-metadata" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.657909 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="98dd959b-802c-4ea8-8876-a9a56e7fbafe" containerName="nova-scheduler-scheduler" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.657920 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="73a7a5f1-017b-4be6-b498-9c9d7416fada" containerName="nova-manage" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.658642 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.658667 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.658682 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.658694 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.659544 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.659967 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.659892 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.663450 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.663681 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.663810 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.703363 4805 scope.go:117] "RemoveContainer" containerID="86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205" Dec 16 12:19:24 crc kubenswrapper[4805]: E1216 12:19:24.704750 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205\": container with ID starting with 86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205 not found: ID does not exist" containerID="86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.704792 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205"} err="failed to get container status \"86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205\": rpc error: code = NotFound desc = could not find container \"86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205\": container with ID starting with 86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205 not found: ID does not exist" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.704819 4805 scope.go:117] "RemoveContainer" containerID="c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4" Dec 16 12:19:24 crc kubenswrapper[4805]: E1216 12:19:24.705074 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4\": container with ID starting with c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4 not found: ID does not exist" containerID="c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.705092 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4"} err="failed to get container status \"c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4\": rpc error: code = NotFound desc = could not find container \"c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4\": container with ID starting with c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4 not found: ID does not exist" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.705104 4805 scope.go:117] "RemoveContainer" containerID="86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.705390 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205"} err="failed to get container status \"86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205\": rpc error: code = NotFound desc = could not find container \"86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205\": container with ID starting with 86ed143f8f78c499323fd9120178413c39c220db62384a09029b720d9eba9205 not found: ID does not exist" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.705405 4805 scope.go:117] "RemoveContainer" containerID="c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.705572 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4"} err="failed to get container status \"c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4\": rpc error: code = NotFound desc = could not find container \"c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4\": container with ID starting with c88d2316f55682871ed7959cf75cf866af0ef524214a5c44948782d8ee42a5b4 not found: ID does not exist" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.843692 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-config-data\") pod \"nova-metadata-0\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") " pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.843915 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e50677-0733-4d96-97a5-6c2b12ecef0c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"81e50677-0733-4d96-97a5-6c2b12ecef0c\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.844074 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") " pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.844126 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtlx4\" (UniqueName: \"kubernetes.io/projected/81e50677-0733-4d96-97a5-6c2b12ecef0c-kube-api-access-dtlx4\") pod \"nova-scheduler-0\" (UID: \"81e50677-0733-4d96-97a5-6c2b12ecef0c\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.844303 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cf5e17d-9fef-467a-97a6-6a02b7922808-logs\") pod \"nova-metadata-0\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") " pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.844357 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") " pod="openstack/nova-metadata-0" Dec 16 
12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.844503 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81e50677-0733-4d96-97a5-6c2b12ecef0c-config-data\") pod \"nova-scheduler-0\" (UID: \"81e50677-0733-4d96-97a5-6c2b12ecef0c\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.844531 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwr9j\" (UniqueName: \"kubernetes.io/projected/8cf5e17d-9fef-467a-97a6-6a02b7922808-kube-api-access-jwr9j\") pod \"nova-metadata-0\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") " pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.946395 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81e50677-0733-4d96-97a5-6c2b12ecef0c-config-data\") pod \"nova-scheduler-0\" (UID: \"81e50677-0733-4d96-97a5-6c2b12ecef0c\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.946712 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwr9j\" (UniqueName: \"kubernetes.io/projected/8cf5e17d-9fef-467a-97a6-6a02b7922808-kube-api-access-jwr9j\") pod \"nova-metadata-0\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") " pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.946758 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-config-data\") pod \"nova-metadata-0\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") " pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.946781 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e50677-0733-4d96-97a5-6c2b12ecef0c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"81e50677-0733-4d96-97a5-6c2b12ecef0c\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.946829 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") " pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.946853 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtlx4\" (UniqueName: \"kubernetes.io/projected/81e50677-0733-4d96-97a5-6c2b12ecef0c-kube-api-access-dtlx4\") pod \"nova-scheduler-0\" (UID: \"81e50677-0733-4d96-97a5-6c2b12ecef0c\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.946918 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cf5e17d-9fef-467a-97a6-6a02b7922808-logs\") pod \"nova-metadata-0\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") " pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.946939 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") " pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.951727 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cf5e17d-9fef-467a-97a6-6a02b7922808-logs\") pod \"nova-metadata-0\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") " pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.954779 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") " pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.957608 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e50677-0733-4d96-97a5-6c2b12ecef0c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"81e50677-0733-4d96-97a5-6c2b12ecef0c\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.957925 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81e50677-0733-4d96-97a5-6c2b12ecef0c-config-data\") pod \"nova-scheduler-0\" (UID: \"81e50677-0733-4d96-97a5-6c2b12ecef0c\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.958368 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-config-data\") pod \"nova-metadata-0\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") " pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.961132 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") " pod="openstack/nova-metadata-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.975181 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtlx4\" (UniqueName: \"kubernetes.io/projected/81e50677-0733-4d96-97a5-6c2b12ecef0c-kube-api-access-dtlx4\") pod \"nova-scheduler-0\" (UID: \"81e50677-0733-4d96-97a5-6c2b12ecef0c\") " pod="openstack/nova-scheduler-0" Dec 16 12:19:24 crc kubenswrapper[4805]: I1216 12:19:24.975365 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwr9j\" (UniqueName: \"kubernetes.io/projected/8cf5e17d-9fef-467a-97a6-6a02b7922808-kube-api-access-jwr9j\") pod \"nova-metadata-0\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") " pod="openstack/nova-metadata-0" Dec 16 12:19:25 crc kubenswrapper[4805]: I1216 12:19:25.004628 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 16 12:19:25 crc kubenswrapper[4805]: I1216 12:19:25.050644 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 16 12:19:25 crc kubenswrapper[4805]: I1216 12:19:25.507056 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:19:25 crc kubenswrapper[4805]: I1216 12:19:25.619247 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.380715 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"81e50677-0733-4d96-97a5-6c2b12ecef0c","Type":"ContainerStarted","Data":"be95fd97bccd3e1e886dc276863dc8703f2b7e775aba0fb4cce7a45adf2df080"} Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.381326 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"81e50677-0733-4d96-97a5-6c2b12ecef0c","Type":"ContainerStarted","Data":"d281c79087136fda5ff3fe4d225529df2c4017ecc2747922f323c46fb7bf1c84"} Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.393283 4805 generic.go:334] "Generic (PLEG): container finished" podID="0360fb3e-6835-48be-b3a9-8d56756d649d" containerID="6cae2baba5cd1fa47d78054e1b89eaff2e92164ef7c647f36c6fb3ee2113e315" exitCode=0 Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.393398 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0360fb3e-6835-48be-b3a9-8d56756d649d","Type":"ContainerDied","Data":"6cae2baba5cd1fa47d78054e1b89eaff2e92164ef7c647f36c6fb3ee2113e315"} Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.405448 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cf5e17d-9fef-467a-97a6-6a02b7922808","Type":"ContainerStarted","Data":"e7fa0c4c51c53c990a91ef90c2316a4ee41ab6826acacbd5acc9839b734ce24e"} Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.405500 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cf5e17d-9fef-467a-97a6-6a02b7922808","Type":"ContainerStarted","Data":"5c1314e0fbba5babeee16a7cb04fb18ca7c31ae30e2bd79237b4dfaed13726c2"} Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.405987 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.405950894 podStartE2EDuration="2.405950894s" podCreationTimestamp="2025-12-16 12:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:19:26.404730449 +0000 UTC m=+1440.122988264" watchObservedRunningTime="2025-12-16 12:19:26.405950894 +0000 UTC m=+1440.124208719" Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.546878 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96dc2db0-57bf-467a-a286-eef9f451ad89" path="/var/lib/kubelet/pods/96dc2db0-57bf-467a-a286-eef9f451ad89/volumes" Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.549122 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98dd959b-802c-4ea8-8876-a9a56e7fbafe" path="/var/lib/kubelet/pods/98dd959b-802c-4ea8-8876-a9a56e7fbafe/volumes" Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.623347 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.634178 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0360fb3e-6835-48be-b3a9-8d56756d649d-config-data\") pod \"0360fb3e-6835-48be-b3a9-8d56756d649d\" (UID: \"0360fb3e-6835-48be-b3a9-8d56756d649d\") " Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.634282 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0360fb3e-6835-48be-b3a9-8d56756d649d-logs\") pod \"0360fb3e-6835-48be-b3a9-8d56756d649d\" (UID: \"0360fb3e-6835-48be-b3a9-8d56756d649d\") " Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.634336 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0360fb3e-6835-48be-b3a9-8d56756d649d-combined-ca-bundle\") pod \"0360fb3e-6835-48be-b3a9-8d56756d649d\" (UID: \"0360fb3e-6835-48be-b3a9-8d56756d649d\") " Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.634361 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqzw5\" (UniqueName: \"kubernetes.io/projected/0360fb3e-6835-48be-b3a9-8d56756d649d-kube-api-access-rqzw5\") pod \"0360fb3e-6835-48be-b3a9-8d56756d649d\" (UID: \"0360fb3e-6835-48be-b3a9-8d56756d649d\") " Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.635062 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0360fb3e-6835-48be-b3a9-8d56756d649d-logs" (OuterVolumeSpecName: "logs") pod "0360fb3e-6835-48be-b3a9-8d56756d649d" (UID: "0360fb3e-6835-48be-b3a9-8d56756d649d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.640103 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0360fb3e-6835-48be-b3a9-8d56756d649d-kube-api-access-rqzw5" (OuterVolumeSpecName: "kube-api-access-rqzw5") pod "0360fb3e-6835-48be-b3a9-8d56756d649d" (UID: "0360fb3e-6835-48be-b3a9-8d56756d649d"). InnerVolumeSpecName "kube-api-access-rqzw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.679317 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0360fb3e-6835-48be-b3a9-8d56756d649d-config-data" (OuterVolumeSpecName: "config-data") pod "0360fb3e-6835-48be-b3a9-8d56756d649d" (UID: "0360fb3e-6835-48be-b3a9-8d56756d649d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.691261 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0360fb3e-6835-48be-b3a9-8d56756d649d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0360fb3e-6835-48be-b3a9-8d56756d649d" (UID: "0360fb3e-6835-48be-b3a9-8d56756d649d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.737490 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0360fb3e-6835-48be-b3a9-8d56756d649d-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.737525 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0360fb3e-6835-48be-b3a9-8d56756d649d-logs\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.737533 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0360fb3e-6835-48be-b3a9-8d56756d649d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:26 crc kubenswrapper[4805]: I1216 12:19:26.737545 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqzw5\" (UniqueName: \"kubernetes.io/projected/0360fb3e-6835-48be-b3a9-8d56756d649d-kube-api-access-rqzw5\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.418449 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0360fb3e-6835-48be-b3a9-8d56756d649d","Type":"ContainerDied","Data":"557da7a8151ca12b0ff921b39845c38e971eae393375d1f3a1d4ca6cf97cf867"} Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.418762 4805 scope.go:117] "RemoveContainer" containerID="6cae2baba5cd1fa47d78054e1b89eaff2e92164ef7c647f36c6fb3ee2113e315" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.418927 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.425911 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cf5e17d-9fef-467a-97a6-6a02b7922808","Type":"ContainerStarted","Data":"8dd5452ce7efcae57a34a5b077e3058df7c05521a46682bdbb9c1274f86799eb"} Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.457088 4805 scope.go:117] "RemoveContainer" containerID="4f437da5e8ed1509308ecb853cf093699562445ec02c72f8239e6c2dd0447c58" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.481650 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.481630372 podStartE2EDuration="3.481630372s" podCreationTimestamp="2025-12-16 12:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:19:27.446576087 +0000 UTC m=+1441.164833902" watchObservedRunningTime="2025-12-16 12:19:27.481630372 +0000 UTC m=+1441.199888187" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.522749 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.543194 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.554605 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 16 12:19:27 crc kubenswrapper[4805]: E1216 12:19:27.555127 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0360fb3e-6835-48be-b3a9-8d56756d649d" containerName="nova-api-api" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.555664 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0360fb3e-6835-48be-b3a9-8d56756d649d" containerName="nova-api-api" Dec 16 12:19:27 crc kubenswrapper[4805]: E1216 12:19:27.555698 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0360fb3e-6835-48be-b3a9-8d56756d649d" containerName="nova-api-log" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.555704 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0360fb3e-6835-48be-b3a9-8d56756d649d" containerName="nova-api-log" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.555897 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0360fb3e-6835-48be-b3a9-8d56756d649d" containerName="nova-api-log" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.555918 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0360fb3e-6835-48be-b3a9-8d56756d649d" containerName="nova-api-api" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.557177 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.564756 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.573010 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.662313 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf2s6\" (UniqueName: \"kubernetes.io/projected/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-kube-api-access-xf2s6\") pod \"nova-api-0\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") " pod="openstack/nova-api-0" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.662714 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-config-data\") pod \"nova-api-0\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") " pod="openstack/nova-api-0" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.662856 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-logs\") pod \"nova-api-0\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") " pod="openstack/nova-api-0" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.663014 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") " pod="openstack/nova-api-0" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.796701 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf2s6\" (UniqueName: \"kubernetes.io/projected/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-kube-api-access-xf2s6\") pod \"nova-api-0\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") " pod="openstack/nova-api-0" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.797038 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-config-data\") pod \"nova-api-0\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") " pod="openstack/nova-api-0" Dec 16 12:19:27 crc 
kubenswrapper[4805]: I1216 12:19:27.797190 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-logs\") pod \"nova-api-0\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") " pod="openstack/nova-api-0" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.797282 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") " pod="openstack/nova-api-0" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.798280 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-logs\") pod \"nova-api-0\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") " pod="openstack/nova-api-0" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.807518 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-config-data\") pod \"nova-api-0\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") " pod="openstack/nova-api-0" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.809894 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") " pod="openstack/nova-api-0" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.819830 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf2s6\" (UniqueName: \"kubernetes.io/projected/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-kube-api-access-xf2s6\") pod \"nova-api-0\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") " pod="openstack/nova-api-0" Dec 16 12:19:27 crc kubenswrapper[4805]: I1216 12:19:27.886960 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.361970 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.383884 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.461464 4805 generic.go:334] "Generic (PLEG): container finished" podID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerID="66ccc4874bd8432eeb48c8dcb2256921e37268a237c578a8ca9f9138c85e75e3" exitCode=0 Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.461576 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0301416f-6de4-41bd-86e5-c6861d64ee07","Type":"ContainerDied","Data":"66ccc4874bd8432eeb48c8dcb2256921e37268a237c578a8ca9f9138c85e75e3"} Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.461627 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0301416f-6de4-41bd-86e5-c6861d64ee07","Type":"ContainerDied","Data":"f1aa0f00024596a535bf1bebba63caffa469bc7f48493d5ec83e2b3c3dd524c7"} Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.461626 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.461656 4805 scope.go:117] "RemoveContainer" containerID="d75581027b22c7767d415dc5c770cea8713dc773665ad24a05e7fffa9dd1e51e" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.465062 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cbde2b12-518d-4b30-8fb6-85dd735c0a9c","Type":"ContainerStarted","Data":"2d26f57e11d4142cbb09bd8f46de18e57b6d65de1a5117069da1af7d70b65db4"} Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.510964 4805 scope.go:117] "RemoveContainer" containerID="e95855c94ad74684a60a96f90e0a11695e11b34db554ffcc7d8f2728395028e3" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.512803 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-sg-core-conf-yaml\") pod \"0301416f-6de4-41bd-86e5-c6861d64ee07\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.512936 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqw4k\" (UniqueName: \"kubernetes.io/projected/0301416f-6de4-41bd-86e5-c6861d64ee07-kube-api-access-rqw4k\") pod \"0301416f-6de4-41bd-86e5-c6861d64ee07\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.513005 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-combined-ca-bundle\") pod \"0301416f-6de4-41bd-86e5-c6861d64ee07\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.513420 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0301416f-6de4-41bd-86e5-c6861d64ee07-run-httpd\") pod \"0301416f-6de4-41bd-86e5-c6861d64ee07\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.513449 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-config-data\") pod \"0301416f-6de4-41bd-86e5-c6861d64ee07\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.513520 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-scripts\") pod \"0301416f-6de4-41bd-86e5-c6861d64ee07\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.513560 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0301416f-6de4-41bd-86e5-c6861d64ee07-log-httpd\") pod \"0301416f-6de4-41bd-86e5-c6861d64ee07\" (UID: \"0301416f-6de4-41bd-86e5-c6861d64ee07\") " Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.514139 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0301416f-6de4-41bd-86e5-c6861d64ee07-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0301416f-6de4-41bd-86e5-c6861d64ee07" (UID: "0301416f-6de4-41bd-86e5-c6861d64ee07"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.514485 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0301416f-6de4-41bd-86e5-c6861d64ee07-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0301416f-6de4-41bd-86e5-c6861d64ee07" (UID: "0301416f-6de4-41bd-86e5-c6861d64ee07"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.523506 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-scripts" (OuterVolumeSpecName: "scripts") pod "0301416f-6de4-41bd-86e5-c6861d64ee07" (UID: "0301416f-6de4-41bd-86e5-c6861d64ee07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.529963 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0301416f-6de4-41bd-86e5-c6861d64ee07-kube-api-access-rqw4k" (OuterVolumeSpecName: "kube-api-access-rqw4k") pod "0301416f-6de4-41bd-86e5-c6861d64ee07" (UID: "0301416f-6de4-41bd-86e5-c6861d64ee07"). InnerVolumeSpecName "kube-api-access-rqw4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.548035 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0360fb3e-6835-48be-b3a9-8d56756d649d" path="/var/lib/kubelet/pods/0360fb3e-6835-48be-b3a9-8d56756d649d/volumes" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.564600 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0301416f-6de4-41bd-86e5-c6861d64ee07" (UID: "0301416f-6de4-41bd-86e5-c6861d64ee07"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.567547 4805 scope.go:117] "RemoveContainer" containerID="66ccc4874bd8432eeb48c8dcb2256921e37268a237c578a8ca9f9138c85e75e3" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.593923 4805 scope.go:117] "RemoveContainer" containerID="1f41b6f481394418f818c132b49aac7db96286e0c4bbbda241fbb9fe7ee35e05" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.616555 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.616591 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0301416f-6de4-41bd-86e5-c6861d64ee07-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.616604 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.616618 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqw4k\" (UniqueName: \"kubernetes.io/projected/0301416f-6de4-41bd-86e5-c6861d64ee07-kube-api-access-rqw4k\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.616630 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0301416f-6de4-41bd-86e5-c6861d64ee07-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.628003 4805 scope.go:117] "RemoveContainer" containerID="d75581027b22c7767d415dc5c770cea8713dc773665ad24a05e7fffa9dd1e51e" Dec 16 12:19:28 crc kubenswrapper[4805]: E1216 12:19:28.629198 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d75581027b22c7767d415dc5c770cea8713dc773665ad24a05e7fffa9dd1e51e\": container with ID starting with d75581027b22c7767d415dc5c770cea8713dc773665ad24a05e7fffa9dd1e51e not found: ID does not exist" containerID="d75581027b22c7767d415dc5c770cea8713dc773665ad24a05e7fffa9dd1e51e" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.629237 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d75581027b22c7767d415dc5c770cea8713dc773665ad24a05e7fffa9dd1e51e"} err="failed to get container status \"d75581027b22c7767d415dc5c770cea8713dc773665ad24a05e7fffa9dd1e51e\": rpc error: code = NotFound desc = could not find container \"d75581027b22c7767d415dc5c770cea8713dc773665ad24a05e7fffa9dd1e51e\": container with ID starting with d75581027b22c7767d415dc5c770cea8713dc773665ad24a05e7fffa9dd1e51e not found: ID does not exist" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.629261 4805 scope.go:117] "RemoveContainer" containerID="e95855c94ad74684a60a96f90e0a11695e11b34db554ffcc7d8f2728395028e3" Dec 16 12:19:28 crc kubenswrapper[4805]: E1216 12:19:28.629899 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e95855c94ad74684a60a96f90e0a11695e11b34db554ffcc7d8f2728395028e3\": container with ID starting with e95855c94ad74684a60a96f90e0a11695e11b34db554ffcc7d8f2728395028e3 not found: ID does not exist" 
containerID="e95855c94ad74684a60a96f90e0a11695e11b34db554ffcc7d8f2728395028e3" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.630038 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e95855c94ad74684a60a96f90e0a11695e11b34db554ffcc7d8f2728395028e3"} err="failed to get container status \"e95855c94ad74684a60a96f90e0a11695e11b34db554ffcc7d8f2728395028e3\": rpc error: code = NotFound desc = could not find container \"e95855c94ad74684a60a96f90e0a11695e11b34db554ffcc7d8f2728395028e3\": container with ID starting with e95855c94ad74684a60a96f90e0a11695e11b34db554ffcc7d8f2728395028e3 not found: ID does not exist" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.630285 4805 scope.go:117] "RemoveContainer" containerID="66ccc4874bd8432eeb48c8dcb2256921e37268a237c578a8ca9f9138c85e75e3" Dec 16 12:19:28 crc kubenswrapper[4805]: E1216 12:19:28.631216 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ccc4874bd8432eeb48c8dcb2256921e37268a237c578a8ca9f9138c85e75e3\": container with ID starting with 66ccc4874bd8432eeb48c8dcb2256921e37268a237c578a8ca9f9138c85e75e3 not found: ID does not exist" containerID="66ccc4874bd8432eeb48c8dcb2256921e37268a237c578a8ca9f9138c85e75e3" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.631313 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ccc4874bd8432eeb48c8dcb2256921e37268a237c578a8ca9f9138c85e75e3"} err="failed to get container status \"66ccc4874bd8432eeb48c8dcb2256921e37268a237c578a8ca9f9138c85e75e3\": rpc error: code = NotFound desc = could not find container \"66ccc4874bd8432eeb48c8dcb2256921e37268a237c578a8ca9f9138c85e75e3\": container with ID starting with 66ccc4874bd8432eeb48c8dcb2256921e37268a237c578a8ca9f9138c85e75e3 not found: ID does not exist" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.631394 4805 scope.go:117] "RemoveContainer" containerID="1f41b6f481394418f818c132b49aac7db96286e0c4bbbda241fbb9fe7ee35e05" Dec 16 12:19:28 crc kubenswrapper[4805]: E1216 12:19:28.632050 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f41b6f481394418f818c132b49aac7db96286e0c4bbbda241fbb9fe7ee35e05\": container with ID starting with 1f41b6f481394418f818c132b49aac7db96286e0c4bbbda241fbb9fe7ee35e05 not found: ID does not exist" containerID="1f41b6f481394418f818c132b49aac7db96286e0c4bbbda241fbb9fe7ee35e05" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.632178 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f41b6f481394418f818c132b49aac7db96286e0c4bbbda241fbb9fe7ee35e05"} err="failed to get container status \"1f41b6f481394418f818c132b49aac7db96286e0c4bbbda241fbb9fe7ee35e05\": rpc error: code = NotFound desc = could not find container \"1f41b6f481394418f818c132b49aac7db96286e0c4bbbda241fbb9fe7ee35e05\": container with ID starting with 1f41b6f481394418f818c132b49aac7db96286e0c4bbbda241fbb9fe7ee35e05 not found: ID does not exist" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.640344 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0301416f-6de4-41bd-86e5-c6861d64ee07" (UID: "0301416f-6de4-41bd-86e5-c6861d64ee07"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.696765 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-config-data" (OuterVolumeSpecName: "config-data") pod "0301416f-6de4-41bd-86e5-c6861d64ee07" (UID: "0301416f-6de4-41bd-86e5-c6861d64ee07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.720326 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.720358 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0301416f-6de4-41bd-86e5-c6861d64ee07-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.816418 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.834202 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.852742 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:19:28 crc kubenswrapper[4805]: E1216 12:19:28.853385 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="proxy-httpd" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.853419 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="proxy-httpd" Dec 16 12:19:28 crc kubenswrapper[4805]: E1216 12:19:28.853443 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="sg-core" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.853452 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="sg-core" Dec 16 12:19:28 crc kubenswrapper[4805]: E1216 12:19:28.853483 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="ceilometer-central-agent" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.853491 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="ceilometer-central-agent" Dec 16 12:19:28 crc kubenswrapper[4805]: E1216 12:19:28.853507 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="ceilometer-notification-agent" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.853514 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="ceilometer-notification-agent" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.853774 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="sg-core" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.853802 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="proxy-httpd" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.853822 4805 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="ceilometer-notification-agent" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.853838 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" containerName="ceilometer-central-agent" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.856153 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.862014 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.862279 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.862441 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 16 12:19:28 crc kubenswrapper[4805]: I1216 12:19:28.863041 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.029596 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-run-httpd\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.029744 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.029790 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.029812 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-config-data\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.029842 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-scripts\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.030008 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.030080 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-9qktx\" (UniqueName: \"kubernetes.io/projected/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-kube-api-access-9qktx\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.030110 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-log-httpd\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.132539 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-run-httpd\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.132633 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.132664 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.132678 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-config-data\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.132699 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-scripts\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.132719 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.132746 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qktx\" (UniqueName: \"kubernetes.io/projected/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-kube-api-access-9qktx\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.132765 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-log-httpd\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.132950 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-run-httpd\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.133015 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-log-httpd\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.140940 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.141083 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-scripts\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.143006 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.143870 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.155490 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-config-data\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.156017 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qktx\" (UniqueName: \"kubernetes.io/projected/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-kube-api-access-9qktx\") pod \"ceilometer-0\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.242977 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.522875 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cbde2b12-518d-4b30-8fb6-85dd735c0a9c","Type":"ContainerStarted","Data":"687f851a702a4d816d47463bb6879c1d12976ce1afc7e81a7c38e7a74d97ea6f"} Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.522912 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cbde2b12-518d-4b30-8fb6-85dd735c0a9c","Type":"ContainerStarted","Data":"c0acc2390c03acf5b1f47be1b9245d7bfcf43cba07b83f0ad4e97575873ece1a"} Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.552897 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.552873862 podStartE2EDuration="2.552873862s" podCreationTimestamp="2025-12-16 12:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:19:29.552264605 +0000 UTC m=+1443.270522400" watchObservedRunningTime="2025-12-16 12:19:29.552873862 +0000 UTC m=+1443.271131677" Dec 16 12:19:29 crc kubenswrapper[4805]: W1216 12:19:29.986391 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cf76587_1abd_4eba_b1e1_0d6bf44d7159.slice/crio-e07e5e4f737d5ed3df9b57034dafbf93a8d814942d5c3abdfa5b4942a06a4230 WatchSource:0}: Error finding container e07e5e4f737d5ed3df9b57034dafbf93a8d814942d5c3abdfa5b4942a06a4230: Status 404 returned error can't find the container with id e07e5e4f737d5ed3df9b57034dafbf93a8d814942d5c3abdfa5b4942a06a4230 Dec 16 12:19:29 crc kubenswrapper[4805]: I1216 12:19:29.999757 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:19:30 crc kubenswrapper[4805]: I1216 12:19:30.005330 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 16 12:19:30 crc kubenswrapper[4805]: I1216 12:19:30.006369 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 16 12:19:30 crc kubenswrapper[4805]: I1216 12:19:30.051882 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 16 12:19:30 crc kubenswrapper[4805]: I1216 12:19:30.543798 4805 generic.go:334] "Generic (PLEG): container finished" podID="6f06d9a3-948e-4e23-9ff0-e933b67ecb5f" containerID="f39f20c711d9cf1226294beb703afab43635a50ca800b65894720531a4a46750" exitCode=0 Dec 16 12:19:30 crc kubenswrapper[4805]: I1216 12:19:30.545501 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0301416f-6de4-41bd-86e5-c6861d64ee07" path="/var/lib/kubelet/pods/0301416f-6de4-41bd-86e5-c6861d64ee07/volumes" Dec 16 12:19:30 crc kubenswrapper[4805]: I1216 12:19:30.546842 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8n694" event={"ID":"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f","Type":"ContainerDied","Data":"f39f20c711d9cf1226294beb703afab43635a50ca800b65894720531a4a46750"} Dec 16 12:19:30 crc kubenswrapper[4805]: I1216 12:19:30.547433 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cf76587-1abd-4eba-b1e1-0d6bf44d7159","Type":"ContainerStarted","Data":"e07e5e4f737d5ed3df9b57034dafbf93a8d814942d5c3abdfa5b4942a06a4230"} Dec 16 12:19:30 crc kubenswrapper[4805]: I1216 
12:19:30.741892 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 16 12:19:31 crc kubenswrapper[4805]: I1216 12:19:31.661846 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cf76587-1abd-4eba-b1e1-0d6bf44d7159","Type":"ContainerStarted","Data":"2fb2ce272370d28a640bf8d99732d15c346c48c2fa04024ff9e636df99e376a6"} Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.059872 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.157271 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-scripts\") pod \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.157370 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-combined-ca-bundle\") pod \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.157543 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-config-data\") pod \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.157654 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25svp\" (UniqueName: \"kubernetes.io/projected/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-kube-api-access-25svp\") pod \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\" (UID: \"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f\") " Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.162801 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-kube-api-access-25svp" (OuterVolumeSpecName: "kube-api-access-25svp") pod "6f06d9a3-948e-4e23-9ff0-e933b67ecb5f" (UID: "6f06d9a3-948e-4e23-9ff0-e933b67ecb5f"). InnerVolumeSpecName "kube-api-access-25svp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.163593 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-scripts" (OuterVolumeSpecName: "scripts") pod "6f06d9a3-948e-4e23-9ff0-e933b67ecb5f" (UID: "6f06d9a3-948e-4e23-9ff0-e933b67ecb5f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.188236 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f06d9a3-948e-4e23-9ff0-e933b67ecb5f" (UID: "6f06d9a3-948e-4e23-9ff0-e933b67ecb5f"). InnerVolumeSpecName "combined-ca-bundle". 
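
Two closely related messages recur in this stretch: util.go:30 "No sandbox for pod can be found. Need to start a new one" for pods the node is seeing for the first time, and util.go:48 "No ready sandbox for pod can be found. Need to start a new one" when a pod's existing sandbox is no longer ready (here nova-cell1-conductor-db-sync-8n694, whose only container just exited). Both force a fresh sandbox; a simplified sketch of the decision, with sandboxState standing in for the CRI PodSandboxStatus result:

package main

import "fmt"

// sandboxState is a simplified stand-in for the CRI PodSandboxStatus result.
type sandboxState int

const (
	sandboxMissing  sandboxState = iota // no sandbox exists for the pod
	sandboxNotReady                     // exists but reported SANDBOX_NOTREADY
	sandboxReady
)

// needNewSandbox mirrors the two log messages above: a missing sandbox and a
// not-ready sandbox both force a new one; only a ready sandbox is reused.
func needNewSandbox(s sandboxState) (bool, string) {
	switch s {
	case sandboxMissing:
		return true, "No sandbox for pod can be found. Need to start a new one"
	case sandboxNotReady:
		return true, "No ready sandbox for pod can be found. Need to start a new one"
	default:
		return false, "reusing existing sandbox"
	}
}

func main() {
	for _, s := range []sandboxState{sandboxMissing, sandboxNotReady, sandboxReady} {
		create, why := needNewSandbox(s)
		fmt.Printf("create=%v: %s\n", create, why)
	}
}
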
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.190735 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-config-data" (OuterVolumeSpecName: "config-data") pod "6f06d9a3-948e-4e23-9ff0-e933b67ecb5f" (UID: "6f06d9a3-948e-4e23-9ff0-e933b67ecb5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.260349 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.260393 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25svp\" (UniqueName: \"kubernetes.io/projected/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-kube-api-access-25svp\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.260406 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.260419 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.695609 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8n694" event={"ID":"6f06d9a3-948e-4e23-9ff0-e933b67ecb5f","Type":"ContainerDied","Data":"f2a26c0a5e4e11091b1d90f40b56d5241e66ed7ee27135de09764571ba0b6dc2"} Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.695987 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2a26c0a5e4e11091b1d90f40b56d5241e66ed7ee27135de09764571ba0b6dc2" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.696317 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8n694" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.702011 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cf76587-1abd-4eba-b1e1-0d6bf44d7159","Type":"ContainerStarted","Data":"e2689751f8b8684895481897393875c2be4a4df62bf6f8d2b56b193e795d2ad0"} Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.796300 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 16 12:19:32 crc kubenswrapper[4805]: E1216 12:19:32.796960 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f06d9a3-948e-4e23-9ff0-e933b67ecb5f" containerName="nova-cell1-conductor-db-sync" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.796988 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f06d9a3-948e-4e23-9ff0-e933b67ecb5f" containerName="nova-cell1-conductor-db-sync" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.797294 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f06d9a3-948e-4e23-9ff0-e933b67ecb5f" containerName="nova-cell1-conductor-db-sync" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.798411 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.803037 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.826973 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.874370 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c8741c-c8da-42f0-9ef8-9e419b58dcf4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a1c8741c-c8da-42f0-9ef8-9e419b58dcf4\") " pod="openstack/nova-cell1-conductor-0" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.874466 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c8741c-c8da-42f0-9ef8-9e419b58dcf4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a1c8741c-c8da-42f0-9ef8-9e419b58dcf4\") " pod="openstack/nova-cell1-conductor-0" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.874503 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-585dg\" (UniqueName: \"kubernetes.io/projected/a1c8741c-c8da-42f0-9ef8-9e419b58dcf4-kube-api-access-585dg\") pod \"nova-cell1-conductor-0\" (UID: \"a1c8741c-c8da-42f0-9ef8-9e419b58dcf4\") " pod="openstack/nova-cell1-conductor-0" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.976169 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c8741c-c8da-42f0-9ef8-9e419b58dcf4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a1c8741c-c8da-42f0-9ef8-9e419b58dcf4\") " pod="openstack/nova-cell1-conductor-0" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.976240 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c8741c-c8da-42f0-9ef8-9e419b58dcf4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a1c8741c-c8da-42f0-9ef8-9e419b58dcf4\") " pod="openstack/nova-cell1-conductor-0" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.976269 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-585dg\" (UniqueName: \"kubernetes.io/projected/a1c8741c-c8da-42f0-9ef8-9e419b58dcf4-kube-api-access-585dg\") pod \"nova-cell1-conductor-0\" (UID: \"a1c8741c-c8da-42f0-9ef8-9e419b58dcf4\") " pod="openstack/nova-cell1-conductor-0" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.982807 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c8741c-c8da-42f0-9ef8-9e419b58dcf4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a1c8741c-c8da-42f0-9ef8-9e419b58dcf4\") " pod="openstack/nova-cell1-conductor-0" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.989494 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c8741c-c8da-42f0-9ef8-9e419b58dcf4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a1c8741c-c8da-42f0-9ef8-9e419b58dcf4\") " pod="openstack/nova-cell1-conductor-0" Dec 16 12:19:32 crc kubenswrapper[4805]: I1216 12:19:32.994395 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-585dg\" (UniqueName: \"kubernetes.io/projected/a1c8741c-c8da-42f0-9ef8-9e419b58dcf4-kube-api-access-585dg\") pod \"nova-cell1-conductor-0\" (UID: \"a1c8741c-c8da-42f0-9ef8-9e419b58dcf4\") " pod="openstack/nova-cell1-conductor-0" Dec 16 12:19:33 crc kubenswrapper[4805]: I1216 12:19:33.155930 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 16 12:19:33 crc kubenswrapper[4805]: I1216 12:19:33.714436 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cf76587-1abd-4eba-b1e1-0d6bf44d7159","Type":"ContainerStarted","Data":"c4d6c8714deb3e7c25e1b8bc67c28d8ef8b4df6c4e24176047dde4336a17759b"} Dec 16 12:19:33 crc kubenswrapper[4805]: I1216 12:19:33.850949 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 16 12:19:34 crc kubenswrapper[4805]: I1216 12:19:34.910323 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a1c8741c-c8da-42f0-9ef8-9e419b58dcf4","Type":"ContainerStarted","Data":"a6455bbbdb01152adbe419005c1ecf74b3f19aa7c1a59951b1e60a49a864a3b7"} Dec 16 12:19:34 crc kubenswrapper[4805]: I1216 12:19:34.910655 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a1c8741c-c8da-42f0-9ef8-9e419b58dcf4","Type":"ContainerStarted","Data":"2ae4029156dc72fab169ede1ae1224b71403689587b0c4c5e6a526172d005853"} Dec 16 12:19:34 crc kubenswrapper[4805]: I1216 12:19:34.911908 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 16 12:19:34 crc kubenswrapper[4805]: I1216 12:19:34.949606 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.949584389 podStartE2EDuration="2.949584389s" podCreationTimestamp="2025-12-16 12:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:19:34.947939571 +0000 UTC m=+1448.666197386" watchObservedRunningTime="2025-12-16 12:19:34.949584389 +0000 UTC m=+1448.667842204" Dec 16 12:19:35 crc kubenswrapper[4805]: I1216 12:19:35.005832 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 16 12:19:35 crc kubenswrapper[4805]: I1216 12:19:35.005881 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 16 12:19:35 crc kubenswrapper[4805]: I1216 12:19:35.052024 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 16 12:19:35 crc kubenswrapper[4805]: I1216 12:19:35.083981 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 16 12:19:35 crc kubenswrapper[4805]: I1216 12:19:35.997235 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 16 12:19:36 crc kubenswrapper[4805]: I1216 12:19:36.020467 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8cf5e17d-9fef-467a-97a6-6a02b7922808" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:19:36 crc 
kubenswrapper[4805]: I1216 12:19:36.020498 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8cf5e17d-9fef-467a-97a6-6a02b7922808" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:19:36 crc kubenswrapper[4805]: I1216 12:19:36.944514 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cf76587-1abd-4eba-b1e1-0d6bf44d7159","Type":"ContainerStarted","Data":"ef9da7926187c5b7544ade1d9bbfb328f1fe1c649115c154514036be6ef84b1d"} Dec 16 12:19:36 crc kubenswrapper[4805]: I1216 12:19:36.945099 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 16 12:19:37 crc kubenswrapper[4805]: I1216 12:19:37.000086 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.226968237 podStartE2EDuration="9.000062584s" podCreationTimestamp="2025-12-16 12:19:28 +0000 UTC" firstStartedPulling="2025-12-16 12:19:29.988976485 +0000 UTC m=+1443.707234290" lastFinishedPulling="2025-12-16 12:19:35.762070832 +0000 UTC m=+1449.480328637" observedRunningTime="2025-12-16 12:19:36.994451183 +0000 UTC m=+1450.712709008" watchObservedRunningTime="2025-12-16 12:19:37.000062584 +0000 UTC m=+1450.718320389" Dec 16 12:19:37 crc kubenswrapper[4805]: I1216 12:19:37.891676 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 16 12:19:37 crc kubenswrapper[4805]: I1216 12:19:37.892003 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 16 12:19:38 crc kubenswrapper[4805]: I1216 12:19:38.973508 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cbde2b12-518d-4b30-8fb6-85dd735c0a9c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 12:19:38 crc kubenswrapper[4805]: I1216 12:19:38.973888 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cbde2b12-518d-4b30-8fb6-85dd735c0a9c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 12:19:43 crc kubenswrapper[4805]: I1216 12:19:43.018785 4805 generic.go:334] "Generic (PLEG): container finished" podID="78d52648-9036-4507-8f7a-0b1c7b2e51fc" containerID="4210058fc4020b555db97a4e614a35dbcb9525c26749ee760709d8f9c66c7218" exitCode=137 Dec 16 12:19:43 crc kubenswrapper[4805]: I1216 12:19:43.019306 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78d52648-9036-4507-8f7a-0b1c7b2e51fc","Type":"ContainerDied","Data":"4210058fc4020b555db97a4e614a35dbcb9525c26749ee760709d8f9c66c7218"} Dec 16 12:19:43 crc kubenswrapper[4805]: I1216 12:19:43.273815 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 16 12:19:43 crc kubenswrapper[4805]: I1216 12:19:43.584164 4805 util.go:48] "No ready sandbox for pod can be found. 
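
The prober.go:107 failures above ("net/http: request canceled (Client.Timeout exceeded while awaiting headers)" for nova-metadata-0, then "context deadline exceeded" for nova-api-0) mean the probe's HTTP client gave up before the server sent response headers; the services were still warming up, and the same startup probes flip to "started" a few seconds later in the log. The mechanism is essentially an HTTP GET with a hard client timeout, roughly:

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

// probeOnce issues one HTTP startup/readiness-style probe with a hard client
// timeout, the mechanism behind the "Client.Timeout exceeded while awaiting
// headers" failures above. Any status below 400 counts as success here.
func probeOnce(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // includes timeout errors like the ones in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// A deliberately slow endpoint reproduces the timeout the kubelet saw.
	slow := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(2 * time.Second)
	}))
	defer slow.Close()
	if err := probeOnce(slow.URL, 500*time.Millisecond); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
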
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:43 crc kubenswrapper[4805]: I1216 12:19:43.676795 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d52648-9036-4507-8f7a-0b1c7b2e51fc-combined-ca-bundle\") pod \"78d52648-9036-4507-8f7a-0b1c7b2e51fc\" (UID: \"78d52648-9036-4507-8f7a-0b1c7b2e51fc\") " Dec 16 12:19:43 crc kubenswrapper[4805]: I1216 12:19:43.676932 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltkxr\" (UniqueName: \"kubernetes.io/projected/78d52648-9036-4507-8f7a-0b1c7b2e51fc-kube-api-access-ltkxr\") pod \"78d52648-9036-4507-8f7a-0b1c7b2e51fc\" (UID: \"78d52648-9036-4507-8f7a-0b1c7b2e51fc\") " Dec 16 12:19:43 crc kubenswrapper[4805]: I1216 12:19:43.677031 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d52648-9036-4507-8f7a-0b1c7b2e51fc-config-data\") pod \"78d52648-9036-4507-8f7a-0b1c7b2e51fc\" (UID: \"78d52648-9036-4507-8f7a-0b1c7b2e51fc\") " Dec 16 12:19:43 crc kubenswrapper[4805]: I1216 12:19:43.682745 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d52648-9036-4507-8f7a-0b1c7b2e51fc-kube-api-access-ltkxr" (OuterVolumeSpecName: "kube-api-access-ltkxr") pod "78d52648-9036-4507-8f7a-0b1c7b2e51fc" (UID: "78d52648-9036-4507-8f7a-0b1c7b2e51fc"). InnerVolumeSpecName "kube-api-access-ltkxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:19:43 crc kubenswrapper[4805]: I1216 12:19:43.705379 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d52648-9036-4507-8f7a-0b1c7b2e51fc-config-data" (OuterVolumeSpecName: "config-data") pod "78d52648-9036-4507-8f7a-0b1c7b2e51fc" (UID: "78d52648-9036-4507-8f7a-0b1c7b2e51fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:43 crc kubenswrapper[4805]: I1216 12:19:43.718407 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d52648-9036-4507-8f7a-0b1c7b2e51fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78d52648-9036-4507-8f7a-0b1c7b2e51fc" (UID: "78d52648-9036-4507-8f7a-0b1c7b2e51fc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:43 crc kubenswrapper[4805]: I1216 12:19:43.779125 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltkxr\" (UniqueName: \"kubernetes.io/projected/78d52648-9036-4507-8f7a-0b1c7b2e51fc-kube-api-access-ltkxr\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:43 crc kubenswrapper[4805]: I1216 12:19:43.779168 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d52648-9036-4507-8f7a-0b1c7b2e51fc-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:43 crc kubenswrapper[4805]: I1216 12:19:43.779178 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d52648-9036-4507-8f7a-0b1c7b2e51fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.037787 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78d52648-9036-4507-8f7a-0b1c7b2e51fc","Type":"ContainerDied","Data":"e43d47c66b7652fe69bfc1f684744307f9d620c0d13de85becab4a153738663e"} Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.038129 4805 scope.go:117] "RemoveContainer" containerID="4210058fc4020b555db97a4e614a35dbcb9525c26749ee760709d8f9c66c7218" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.038329 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.084235 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.104215 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.141345 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 16 12:19:44 crc kubenswrapper[4805]: E1216 12:19:44.143292 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d52648-9036-4507-8f7a-0b1c7b2e51fc" containerName="nova-cell1-novncproxy-novncproxy" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.143317 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d52648-9036-4507-8f7a-0b1c7b2e51fc" containerName="nova-cell1-novncproxy-novncproxy" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.148015 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d52648-9036-4507-8f7a-0b1c7b2e51fc" containerName="nova-cell1-novncproxy-novncproxy" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.149276 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.152914 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.155658 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.155926 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.166098 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.288668 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67131b33-530e-49eb-9e82-cfbe1a05a5f9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"67131b33-530e-49eb-9e82-cfbe1a05a5f9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.288736 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/67131b33-530e-49eb-9e82-cfbe1a05a5f9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"67131b33-530e-49eb-9e82-cfbe1a05a5f9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.288925 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/67131b33-530e-49eb-9e82-cfbe1a05a5f9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"67131b33-530e-49eb-9e82-cfbe1a05a5f9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.289000 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67131b33-530e-49eb-9e82-cfbe1a05a5f9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"67131b33-530e-49eb-9e82-cfbe1a05a5f9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.289104 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6k95\" (UniqueName: \"kubernetes.io/projected/67131b33-530e-49eb-9e82-cfbe1a05a5f9-kube-api-access-g6k95\") pod \"nova-cell1-novncproxy-0\" (UID: \"67131b33-530e-49eb-9e82-cfbe1a05a5f9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.391049 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6k95\" (UniqueName: \"kubernetes.io/projected/67131b33-530e-49eb-9e82-cfbe1a05a5f9-kube-api-access-g6k95\") pod \"nova-cell1-novncproxy-0\" (UID: \"67131b33-530e-49eb-9e82-cfbe1a05a5f9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.391157 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67131b33-530e-49eb-9e82-cfbe1a05a5f9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"67131b33-530e-49eb-9e82-cfbe1a05a5f9\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.391189 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/67131b33-530e-49eb-9e82-cfbe1a05a5f9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"67131b33-530e-49eb-9e82-cfbe1a05a5f9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.391250 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/67131b33-530e-49eb-9e82-cfbe1a05a5f9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"67131b33-530e-49eb-9e82-cfbe1a05a5f9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.391305 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67131b33-530e-49eb-9e82-cfbe1a05a5f9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"67131b33-530e-49eb-9e82-cfbe1a05a5f9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.395589 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67131b33-530e-49eb-9e82-cfbe1a05a5f9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"67131b33-530e-49eb-9e82-cfbe1a05a5f9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.395713 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/67131b33-530e-49eb-9e82-cfbe1a05a5f9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"67131b33-530e-49eb-9e82-cfbe1a05a5f9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.395927 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/67131b33-530e-49eb-9e82-cfbe1a05a5f9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"67131b33-530e-49eb-9e82-cfbe1a05a5f9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.398214 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67131b33-530e-49eb-9e82-cfbe1a05a5f9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"67131b33-530e-49eb-9e82-cfbe1a05a5f9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.407343 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6k95\" (UniqueName: \"kubernetes.io/projected/67131b33-530e-49eb-9e82-cfbe1a05a5f9-kube-api-access-g6k95\") pod \"nova-cell1-novncproxy-0\" (UID: \"67131b33-530e-49eb-9e82-cfbe1a05a5f9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.486882 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:44 crc kubenswrapper[4805]: I1216 12:19:44.541742 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78d52648-9036-4507-8f7a-0b1c7b2e51fc" path="/var/lib/kubelet/pods/78d52648-9036-4507-8f7a-0b1c7b2e51fc/volumes" Dec 16 12:19:45 crc kubenswrapper[4805]: I1216 12:19:45.011734 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 16 12:19:45 crc kubenswrapper[4805]: I1216 12:19:45.015505 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 16 12:19:45 crc kubenswrapper[4805]: I1216 12:19:45.040487 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 16 12:19:45 crc kubenswrapper[4805]: I1216 12:19:45.041744 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 16 12:19:45 crc kubenswrapper[4805]: I1216 12:19:45.066348 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 16 12:19:46 crc kubenswrapper[4805]: I1216 12:19:46.065195 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"67131b33-530e-49eb-9e82-cfbe1a05a5f9","Type":"ContainerStarted","Data":"6d62241804c53f72db544e3245f929695bdfa238f20b33c4220eeda775d01d36"} Dec 16 12:19:46 crc kubenswrapper[4805]: I1216 12:19:46.065796 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"67131b33-530e-49eb-9e82-cfbe1a05a5f9","Type":"ContainerStarted","Data":"4165cda5cbdf2f0deff98c63b62951ca6e4215cec9ee31d021e3a51ad667881f"} Dec 16 12:19:46 crc kubenswrapper[4805]: I1216 12:19:46.098934 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.098907926 podStartE2EDuration="2.098907926s" podCreationTimestamp="2025-12-16 12:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:19:46.081220139 +0000 UTC m=+1459.799477964" watchObservedRunningTime="2025-12-16 12:19:46.098907926 +0000 UTC m=+1459.817165751" Dec 16 12:19:47 crc kubenswrapper[4805]: I1216 12:19:47.907929 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 16 12:19:47 crc kubenswrapper[4805]: I1216 12:19:47.911448 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 16 12:19:47 crc kubenswrapper[4805]: I1216 12:19:47.915533 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 16 12:19:47 crc kubenswrapper[4805]: I1216 12:19:47.918920 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.082415 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.086470 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.495594 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-5ncmd"] Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.497294 4805 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.556548 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-5ncmd"] Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.607394 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.607467 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-config\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.607510 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8sxn\" (UniqueName: \"kubernetes.io/projected/4bbdc89b-8d50-4756-83ae-eaeeaa419579-kube-api-access-r8sxn\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.607566 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.607598 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.607721 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.709246 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8sxn\" (UniqueName: \"kubernetes.io/projected/4bbdc89b-8d50-4756-83ae-eaeeaa419579-kube-api-access-r8sxn\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.709331 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 
Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.709366 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd"
Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.709446 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd"
Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.709484 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd"
Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.709529 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-config\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd"
Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.710389 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-config\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd"
Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.711637 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd"
Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.712254 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd"
Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.713987 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd"
Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.714899 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd"
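[Annotation] Each "MountVolume.SetUp succeeded" for a configmap volume means the payload was published with the atomic-writer layout: files land in a fresh timestamped directory and a "..data" symlink is renamed over the previous one, so a consumer never observes a half-written update. A sketch of that pattern (directory naming and paths are illustrative, not the kubelet's exact scheme):

    package main

    import (
    	"log"
    	"os"
    	"path/filepath"
    	"time"
    )

    // publish writes a configmap-style payload atomically: new timestamped
    // directory, then swap the "..data" symlink via rename(2).
    func publish(volumeDir string, payload map[string][]byte) error {
    	tsName := time.Now().Format("..2006_01_02_15_04_05") // collision handling omitted
    	tsDir := filepath.Join(volumeDir, tsName)
    	if err := os.MkdirAll(tsDir, 0o755); err != nil {
    		return err
    	}
    	for name, data := range payload {
    		if err := os.WriteFile(filepath.Join(tsDir, name), data, 0o644); err != nil {
    			return err
    		}
    	}
    	// Stage the new symlink, then rename it over "..data": rename is
    	// atomic, so a reader resolving volumeDir/..data/<file> sees either
    	// the old payload or the new one, never a mix.
    	tmp := filepath.Join(volumeDir, "..data_tmp")
    	if err := os.Symlink(tsName, tmp); err != nil {
    		return err
    	}
    	return os.Rename(tmp, filepath.Join(volumeDir, "..data"))
    }

    func main() {
    	dir, err := os.MkdirTemp("", "vol")
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := publish(dir, map[string][]byte{"dns-svc": []byte("...")}); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("payload published under", filepath.Join(dir, "..data"))
    }
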
\"kubernetes.io/projected/4bbdc89b-8d50-4756-83ae-eaeeaa419579-kube-api-access-r8sxn\") pod \"dnsmasq-dns-59cf4bdb65-5ncmd\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" Dec 16 12:19:48 crc kubenswrapper[4805]: I1216 12:19:48.846554 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" Dec 16 12:19:50 crc kubenswrapper[4805]: I1216 12:19:49.488044 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:50 crc kubenswrapper[4805]: I1216 12:19:50.921912 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-5ncmd"] Dec 16 12:19:50 crc kubenswrapper[4805]: W1216 12:19:50.929742 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bbdc89b_8d50_4756_83ae_eaeeaa419579.slice/crio-0754bf20fa20c0b4a9fcc8940a3210872b990ca3a99c19367da2b96779e44a81 WatchSource:0}: Error finding container 0754bf20fa20c0b4a9fcc8940a3210872b990ca3a99c19367da2b96779e44a81: Status 404 returned error can't find the container with id 0754bf20fa20c0b4a9fcc8940a3210872b990ca3a99c19367da2b96779e44a81 Dec 16 12:19:50 crc kubenswrapper[4805]: I1216 12:19:50.976481 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:19:50 crc kubenswrapper[4805]: I1216 12:19:50.978187 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="ceilometer-central-agent" containerID="cri-o://2fb2ce272370d28a640bf8d99732d15c346c48c2fa04024ff9e636df99e376a6" gracePeriod=30 Dec 16 12:19:50 crc kubenswrapper[4805]: I1216 12:19:50.978741 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="proxy-httpd" containerID="cri-o://ef9da7926187c5b7544ade1d9bbfb328f1fe1c649115c154514036be6ef84b1d" gracePeriod=30 Dec 16 12:19:50 crc kubenswrapper[4805]: I1216 12:19:50.978807 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="sg-core" containerID="cri-o://c4d6c8714deb3e7c25e1b8bc67c28d8ef8b4df6c4e24176047dde4336a17759b" gracePeriod=30 Dec 16 12:19:50 crc kubenswrapper[4805]: I1216 12:19:50.978850 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="ceilometer-notification-agent" containerID="cri-o://e2689751f8b8684895481897393875c2be4a4df62bf6f8d2b56b193e795d2ad0" gracePeriod=30 Dec 16 12:19:51 crc kubenswrapper[4805]: I1216 12:19:50.999379 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.198:3000/\": EOF" Dec 16 12:19:51 crc kubenswrapper[4805]: I1216 12:19:51.131091 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 16 12:19:51 crc kubenswrapper[4805]: I1216 12:19:51.185183 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cbde2b12-518d-4b30-8fb6-85dd735c0a9c" containerName="nova-api-log" 
containerID="cri-o://c0acc2390c03acf5b1f47be1b9245d7bfcf43cba07b83f0ad4e97575873ece1a" gracePeriod=30 Dec 16 12:19:51 crc kubenswrapper[4805]: I1216 12:19:51.185683 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" event={"ID":"4bbdc89b-8d50-4756-83ae-eaeeaa419579","Type":"ContainerStarted","Data":"0754bf20fa20c0b4a9fcc8940a3210872b990ca3a99c19367da2b96779e44a81"} Dec 16 12:19:51 crc kubenswrapper[4805]: I1216 12:19:51.185806 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cbde2b12-518d-4b30-8fb6-85dd735c0a9c" containerName="nova-api-api" containerID="cri-o://687f851a702a4d816d47463bb6879c1d12976ce1afc7e81a7c38e7a74d97ea6f" gracePeriod=30 Dec 16 12:19:52 crc kubenswrapper[4805]: I1216 12:19:52.204422 4805 generic.go:334] "Generic (PLEG): container finished" podID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerID="ef9da7926187c5b7544ade1d9bbfb328f1fe1c649115c154514036be6ef84b1d" exitCode=0 Dec 16 12:19:52 crc kubenswrapper[4805]: I1216 12:19:52.204474 4805 generic.go:334] "Generic (PLEG): container finished" podID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerID="c4d6c8714deb3e7c25e1b8bc67c28d8ef8b4df6c4e24176047dde4336a17759b" exitCode=2 Dec 16 12:19:52 crc kubenswrapper[4805]: I1216 12:19:52.204485 4805 generic.go:334] "Generic (PLEG): container finished" podID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerID="2fb2ce272370d28a640bf8d99732d15c346c48c2fa04024ff9e636df99e376a6" exitCode=0 Dec 16 12:19:52 crc kubenswrapper[4805]: I1216 12:19:52.204488 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cf76587-1abd-4eba-b1e1-0d6bf44d7159","Type":"ContainerDied","Data":"ef9da7926187c5b7544ade1d9bbfb328f1fe1c649115c154514036be6ef84b1d"} Dec 16 12:19:52 crc kubenswrapper[4805]: I1216 12:19:52.204535 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cf76587-1abd-4eba-b1e1-0d6bf44d7159","Type":"ContainerDied","Data":"c4d6c8714deb3e7c25e1b8bc67c28d8ef8b4df6c4e24176047dde4336a17759b"} Dec 16 12:19:52 crc kubenswrapper[4805]: I1216 12:19:52.204551 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cf76587-1abd-4eba-b1e1-0d6bf44d7159","Type":"ContainerDied","Data":"2fb2ce272370d28a640bf8d99732d15c346c48c2fa04024ff9e636df99e376a6"} Dec 16 12:19:52 crc kubenswrapper[4805]: I1216 12:19:52.206535 4805 generic.go:334] "Generic (PLEG): container finished" podID="cbde2b12-518d-4b30-8fb6-85dd735c0a9c" containerID="c0acc2390c03acf5b1f47be1b9245d7bfcf43cba07b83f0ad4e97575873ece1a" exitCode=143 Dec 16 12:19:52 crc kubenswrapper[4805]: I1216 12:19:52.206619 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cbde2b12-518d-4b30-8fb6-85dd735c0a9c","Type":"ContainerDied","Data":"c0acc2390c03acf5b1f47be1b9245d7bfcf43cba07b83f0ad4e97575873ece1a"} Dec 16 12:19:52 crc kubenswrapper[4805]: I1216 12:19:52.208468 4805 generic.go:334] "Generic (PLEG): container finished" podID="4bbdc89b-8d50-4756-83ae-eaeeaa419579" containerID="ccf0006284aecc1fee2fbef45185e02937f14452739c6f4f8060745ee44b67d6" exitCode=0 Dec 16 12:19:52 crc kubenswrapper[4805]: I1216 12:19:52.208512 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" event={"ID":"4bbdc89b-8d50-4756-83ae-eaeeaa419579","Type":"ContainerDied","Data":"ccf0006284aecc1fee2fbef45185e02937f14452739c6f4f8060745ee44b67d6"} Dec 16 12:19:53 crc 
Dec 16 12:19:53 crc kubenswrapper[4805]: I1216 12:19:53.221397 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" event={"ID":"4bbdc89b-8d50-4756-83ae-eaeeaa419579","Type":"ContainerStarted","Data":"24da059aa451978d31b1b1e30b3cf8fa240a3c2f3a314cb1c3393312ee25a8ef"}
Dec 16 12:19:53 crc kubenswrapper[4805]: I1216 12:19:53.221915 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd"
Dec 16 12:19:53 crc kubenswrapper[4805]: I1216 12:19:53.255044 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" podStartSLOduration=5.255020672 podStartE2EDuration="5.255020672s" podCreationTimestamp="2025-12-16 12:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:19:53.251659025 +0000 UTC m=+1466.969916830" watchObservedRunningTime="2025-12-16 12:19:53.255020672 +0000 UTC m=+1466.973278487"
Dec 16 12:19:54 crc kubenswrapper[4805]: I1216 12:19:54.488239 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Dec 16 12:19:54 crc kubenswrapper[4805]: I1216 12:19:54.518064 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Dec 16 12:19:54 crc kubenswrapper[4805]: I1216 12:19:54.826464 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 16 12:19:54 crc kubenswrapper[4805]: I1216 12:19:54.982447 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf2s6\" (UniqueName: \"kubernetes.io/projected/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-kube-api-access-xf2s6\") pod \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") "
Dec 16 12:19:54 crc kubenswrapper[4805]: I1216 12:19:54.982634 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-logs\") pod \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") "
Dec 16 12:19:54 crc kubenswrapper[4805]: I1216 12:19:54.982683 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-config-data\") pod \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") "
Dec 16 12:19:54 crc kubenswrapper[4805]: I1216 12:19:54.982744 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-combined-ca-bundle\") pod \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\" (UID: \"cbde2b12-518d-4b30-8fb6-85dd735c0a9c\") "
Dec 16 12:19:54 crc kubenswrapper[4805]: I1216 12:19:54.983203 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-logs" (OuterVolumeSpecName: "logs") pod "cbde2b12-518d-4b30-8fb6-85dd735c0a9c" (UID: "cbde2b12-518d-4b30-8fb6-85dd735c0a9c"). InnerVolumeSpecName "logs".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:19:54 crc kubenswrapper[4805]: I1216 12:19:54.983391 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-logs\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:54 crc kubenswrapper[4805]: I1216 12:19:54.988419 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-kube-api-access-xf2s6" (OuterVolumeSpecName: "kube-api-access-xf2s6") pod "cbde2b12-518d-4b30-8fb6-85dd735c0a9c" (UID: "cbde2b12-518d-4b30-8fb6-85dd735c0a9c"). InnerVolumeSpecName "kube-api-access-xf2s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.025730 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-config-data" (OuterVolumeSpecName: "config-data") pod "cbde2b12-518d-4b30-8fb6-85dd735c0a9c" (UID: "cbde2b12-518d-4b30-8fb6-85dd735c0a9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.035409 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbde2b12-518d-4b30-8fb6-85dd735c0a9c" (UID: "cbde2b12-518d-4b30-8fb6-85dd735c0a9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.084856 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf2s6\" (UniqueName: \"kubernetes.io/projected/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-kube-api-access-xf2s6\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.084889 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.084900 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbde2b12-518d-4b30-8fb6-85dd735c0a9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.244070 4805 generic.go:334] "Generic (PLEG): container finished" podID="cbde2b12-518d-4b30-8fb6-85dd735c0a9c" containerID="687f851a702a4d816d47463bb6879c1d12976ce1afc7e81a7c38e7a74d97ea6f" exitCode=0 Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.244130 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.244175 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cbde2b12-518d-4b30-8fb6-85dd735c0a9c","Type":"ContainerDied","Data":"687f851a702a4d816d47463bb6879c1d12976ce1afc7e81a7c38e7a74d97ea6f"} Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.244788 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cbde2b12-518d-4b30-8fb6-85dd735c0a9c","Type":"ContainerDied","Data":"2d26f57e11d4142cbb09bd8f46de18e57b6d65de1a5117069da1af7d70b65db4"} Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.244816 4805 scope.go:117] "RemoveContainer" containerID="687f851a702a4d816d47463bb6879c1d12976ce1afc7e81a7c38e7a74d97ea6f" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.267297 4805 scope.go:117] "RemoveContainer" containerID="c0acc2390c03acf5b1f47be1b9245d7bfcf43cba07b83f0ad4e97575873ece1a" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.267584 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.283908 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.288293 4805 scope.go:117] "RemoveContainer" containerID="687f851a702a4d816d47463bb6879c1d12976ce1afc7e81a7c38e7a74d97ea6f" Dec 16 12:19:55 crc kubenswrapper[4805]: E1216 12:19:55.289237 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"687f851a702a4d816d47463bb6879c1d12976ce1afc7e81a7c38e7a74d97ea6f\": container with ID starting with 687f851a702a4d816d47463bb6879c1d12976ce1afc7e81a7c38e7a74d97ea6f not found: ID does not exist" containerID="687f851a702a4d816d47463bb6879c1d12976ce1afc7e81a7c38e7a74d97ea6f" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.289290 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"687f851a702a4d816d47463bb6879c1d12976ce1afc7e81a7c38e7a74d97ea6f"} err="failed to get container status \"687f851a702a4d816d47463bb6879c1d12976ce1afc7e81a7c38e7a74d97ea6f\": rpc error: code = NotFound desc = could not find container \"687f851a702a4d816d47463bb6879c1d12976ce1afc7e81a7c38e7a74d97ea6f\": container with ID starting with 687f851a702a4d816d47463bb6879c1d12976ce1afc7e81a7c38e7a74d97ea6f not found: ID does not exist" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.289328 4805 scope.go:117] "RemoveContainer" containerID="c0acc2390c03acf5b1f47be1b9245d7bfcf43cba07b83f0ad4e97575873ece1a" Dec 16 12:19:55 crc kubenswrapper[4805]: E1216 12:19:55.290107 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0acc2390c03acf5b1f47be1b9245d7bfcf43cba07b83f0ad4e97575873ece1a\": container with ID starting with c0acc2390c03acf5b1f47be1b9245d7bfcf43cba07b83f0ad4e97575873ece1a not found: ID does not exist" containerID="c0acc2390c03acf5b1f47be1b9245d7bfcf43cba07b83f0ad4e97575873ece1a" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.290125 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0acc2390c03acf5b1f47be1b9245d7bfcf43cba07b83f0ad4e97575873ece1a"} err="failed to get container status \"c0acc2390c03acf5b1f47be1b9245d7bfcf43cba07b83f0ad4e97575873ece1a\": rpc error: code 
= NotFound desc = could not find container \"c0acc2390c03acf5b1f47be1b9245d7bfcf43cba07b83f0ad4e97575873ece1a\": container with ID starting with c0acc2390c03acf5b1f47be1b9245d7bfcf43cba07b83f0ad4e97575873ece1a not found: ID does not exist" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.293777 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.327556 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 16 12:19:55 crc kubenswrapper[4805]: E1216 12:19:55.328132 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbde2b12-518d-4b30-8fb6-85dd735c0a9c" containerName="nova-api-api" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.328173 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbde2b12-518d-4b30-8fb6-85dd735c0a9c" containerName="nova-api-api" Dec 16 12:19:55 crc kubenswrapper[4805]: E1216 12:19:55.328215 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbde2b12-518d-4b30-8fb6-85dd735c0a9c" containerName="nova-api-log" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.328225 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbde2b12-518d-4b30-8fb6-85dd735c0a9c" containerName="nova-api-log" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.328451 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbde2b12-518d-4b30-8fb6-85dd735c0a9c" containerName="nova-api-api" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.328489 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbde2b12-518d-4b30-8fb6-85dd735c0a9c" containerName="nova-api-log" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.329897 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.333537 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.333564 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.341218 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.363815 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.484646 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-h574q"] Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.486218 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-h574q" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.489940 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.490163 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.493875 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-public-tls-certs\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.494027 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45414083-33e0-4328-9064-68d63adecbeb-logs\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.494058 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.494105 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-config-data\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.494276 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lxk5\" (UniqueName: \"kubernetes.io/projected/45414083-33e0-4328-9064-68d63adecbeb-kube-api-access-6lxk5\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.494389 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.511064 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-h574q"] Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.596112 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-public-tls-certs\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.596406 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45414083-33e0-4328-9064-68d63adecbeb-logs\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 
12:19:55.596487 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.596561 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-config-data\") pod \"nova-cell1-cell-mapping-h574q\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") " pod="openstack/nova-cell1-cell-mapping-h574q" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.596737 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-config-data\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.596816 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-scripts\") pod \"nova-cell1-cell-mapping-h574q\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") " pod="openstack/nova-cell1-cell-mapping-h574q" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.596997 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lxk5\" (UniqueName: \"kubernetes.io/projected/45414083-33e0-4328-9064-68d63adecbeb-kube-api-access-6lxk5\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.597038 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-h574q\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") " pod="openstack/nova-cell1-cell-mapping-h574q" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.597112 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvggn\" (UniqueName: \"kubernetes.io/projected/8213d94b-5cc7-407e-aedf-298f52f52198-kube-api-access-xvggn\") pod \"nova-cell1-cell-mapping-h574q\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") " pod="openstack/nova-cell1-cell-mapping-h574q" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.597183 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.597489 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45414083-33e0-4328-9064-68d63adecbeb-logs\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.601644 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-config-data\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.609802 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.610477 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.612187 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-public-tls-certs\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.616128 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lxk5\" (UniqueName: \"kubernetes.io/projected/45414083-33e0-4328-9064-68d63adecbeb-kube-api-access-6lxk5\") pod \"nova-api-0\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") " pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.662201 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.698926 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-scripts\") pod \"nova-cell1-cell-mapping-h574q\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") " pod="openstack/nova-cell1-cell-mapping-h574q" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.699178 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-h574q\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") " pod="openstack/nova-cell1-cell-mapping-h574q" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.699288 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvggn\" (UniqueName: \"kubernetes.io/projected/8213d94b-5cc7-407e-aedf-298f52f52198-kube-api-access-xvggn\") pod \"nova-cell1-cell-mapping-h574q\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") " pod="openstack/nova-cell1-cell-mapping-h574q" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.699503 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-config-data\") pod \"nova-cell1-cell-mapping-h574q\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") " pod="openstack/nova-cell1-cell-mapping-h574q" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.705268 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-scripts\") pod 
\"nova-cell1-cell-mapping-h574q\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") " pod="openstack/nova-cell1-cell-mapping-h574q" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.705301 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-config-data\") pod \"nova-cell1-cell-mapping-h574q\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") " pod="openstack/nova-cell1-cell-mapping-h574q" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.724848 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-h574q\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") " pod="openstack/nova-cell1-cell-mapping-h574q" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.729367 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvggn\" (UniqueName: \"kubernetes.io/projected/8213d94b-5cc7-407e-aedf-298f52f52198-kube-api-access-xvggn\") pod \"nova-cell1-cell-mapping-h574q\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") " pod="openstack/nova-cell1-cell-mapping-h574q" Dec 16 12:19:55 crc kubenswrapper[4805]: I1216 12:19:55.804184 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-h574q" Dec 16 12:19:56 crc kubenswrapper[4805]: I1216 12:19:56.192647 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 16 12:19:56 crc kubenswrapper[4805]: W1216 12:19:56.205483 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45414083_33e0_4328_9064_68d63adecbeb.slice/crio-b540712da1e2fdf5d1940441f49fe49313d5ba78b234ea1be8044f30ef006d33 WatchSource:0}: Error finding container b540712da1e2fdf5d1940441f49fe49313d5ba78b234ea1be8044f30ef006d33: Status 404 returned error can't find the container with id b540712da1e2fdf5d1940441f49fe49313d5ba78b234ea1be8044f30ef006d33 Dec 16 12:19:56 crc kubenswrapper[4805]: I1216 12:19:56.256367 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45414083-33e0-4328-9064-68d63adecbeb","Type":"ContainerStarted","Data":"b540712da1e2fdf5d1940441f49fe49313d5ba78b234ea1be8044f30ef006d33"} Dec 16 12:19:56 crc kubenswrapper[4805]: I1216 12:19:56.338908 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-h574q"] Dec 16 12:19:56 crc kubenswrapper[4805]: W1216 12:19:56.360429 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8213d94b_5cc7_407e_aedf_298f52f52198.slice/crio-8e8601d928578e5aaca8ca207d7357b7119aa7f8d4256764fba4e6c7b95ef4ae WatchSource:0}: Error finding container 8e8601d928578e5aaca8ca207d7357b7119aa7f8d4256764fba4e6c7b95ef4ae: Status 404 returned error can't find the container with id 8e8601d928578e5aaca8ca207d7357b7119aa7f8d4256764fba4e6c7b95ef4ae Dec 16 12:19:56 crc kubenswrapper[4805]: I1216 12:19:56.534286 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbde2b12-518d-4b30-8fb6-85dd735c0a9c" path="/var/lib/kubelet/pods/cbde2b12-518d-4b30-8fb6-85dd735c0a9c/volumes" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.278057 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerID="e2689751f8b8684895481897393875c2be4a4df62bf6f8d2b56b193e795d2ad0" exitCode=0 Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.278291 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cf76587-1abd-4eba-b1e1-0d6bf44d7159","Type":"ContainerDied","Data":"e2689751f8b8684895481897393875c2be4a4df62bf6f8d2b56b193e795d2ad0"} Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.283300 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45414083-33e0-4328-9064-68d63adecbeb","Type":"ContainerStarted","Data":"efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15"} Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.283340 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45414083-33e0-4328-9064-68d63adecbeb","Type":"ContainerStarted","Data":"e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4"} Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.284964 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-h574q" event={"ID":"8213d94b-5cc7-407e-aedf-298f52f52198","Type":"ContainerStarted","Data":"9bfca712ca63c20fa5d088250ff1b0f5b3ee8b603e1647e152a27f782cce0573"} Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.285001 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-h574q" event={"ID":"8213d94b-5cc7-407e-aedf-298f52f52198","Type":"ContainerStarted","Data":"8e8601d928578e5aaca8ca207d7357b7119aa7f8d4256764fba4e6c7b95ef4ae"} Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.354194 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.3541730100000002 podStartE2EDuration="2.35417301s" podCreationTimestamp="2025-12-16 12:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:19:57.323806159 +0000 UTC m=+1471.042063974" watchObservedRunningTime="2025-12-16 12:19:57.35417301 +0000 UTC m=+1471.072430835" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.355861 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.374550 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-h574q" podStartSLOduration=2.374523683 podStartE2EDuration="2.374523683s" podCreationTimestamp="2025-12-16 12:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:19:57.34895707 +0000 UTC m=+1471.067214875" watchObservedRunningTime="2025-12-16 12:19:57.374523683 +0000 UTC m=+1471.092781498" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.453161 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-log-httpd\") pod \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.453219 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qktx\" (UniqueName: \"kubernetes.io/projected/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-kube-api-access-9qktx\") pod \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.453357 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-config-data\") pod \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.453400 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-combined-ca-bundle\") pod \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.453431 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-ceilometer-tls-certs\") pod \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.453448 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-run-httpd\") pod \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.453465 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-sg-core-conf-yaml\") pod \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.453529 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-scripts\") pod \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\" (UID: \"2cf76587-1abd-4eba-b1e1-0d6bf44d7159\") " Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.453558 4805 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2cf76587-1abd-4eba-b1e1-0d6bf44d7159" (UID: "2cf76587-1abd-4eba-b1e1-0d6bf44d7159"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.453929 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.454178 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2cf76587-1abd-4eba-b1e1-0d6bf44d7159" (UID: "2cf76587-1abd-4eba-b1e1-0d6bf44d7159"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.458263 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-scripts" (OuterVolumeSpecName: "scripts") pod "2cf76587-1abd-4eba-b1e1-0d6bf44d7159" (UID: "2cf76587-1abd-4eba-b1e1-0d6bf44d7159"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.458461 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-kube-api-access-9qktx" (OuterVolumeSpecName: "kube-api-access-9qktx") pod "2cf76587-1abd-4eba-b1e1-0d6bf44d7159" (UID: "2cf76587-1abd-4eba-b1e1-0d6bf44d7159"). InnerVolumeSpecName "kube-api-access-9qktx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.484190 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2cf76587-1abd-4eba-b1e1-0d6bf44d7159" (UID: "2cf76587-1abd-4eba-b1e1-0d6bf44d7159"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.522488 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2cf76587-1abd-4eba-b1e1-0d6bf44d7159" (UID: "2cf76587-1abd-4eba-b1e1-0d6bf44d7159"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.557267 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.557303 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.557315 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.558388 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-scripts\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.558414 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qktx\" (UniqueName: \"kubernetes.io/projected/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-kube-api-access-9qktx\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.596419 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cf76587-1abd-4eba-b1e1-0d6bf44d7159" (UID: "2cf76587-1abd-4eba-b1e1-0d6bf44d7159"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.642864 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-config-data" (OuterVolumeSpecName: "config-data") pod "2cf76587-1abd-4eba-b1e1-0d6bf44d7159" (UID: "2cf76587-1abd-4eba-b1e1-0d6bf44d7159"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.660011 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:57 crc kubenswrapper[4805]: I1216 12:19:57.660044 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf76587-1abd-4eba-b1e1-0d6bf44d7159-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.296579 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2cf76587-1abd-4eba-b1e1-0d6bf44d7159","Type":"ContainerDied","Data":"e07e5e4f737d5ed3df9b57034dafbf93a8d814942d5c3abdfa5b4942a06a4230"} Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.296952 4805 scope.go:117] "RemoveContainer" containerID="ef9da7926187c5b7544ade1d9bbfb328f1fe1c649115c154514036be6ef84b1d" Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.296759 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.322010 4805 scope.go:117] "RemoveContainer" containerID="c4d6c8714deb3e7c25e1b8bc67c28d8ef8b4df6c4e24176047dde4336a17759b" Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.355892 4805 scope.go:117] "RemoveContainer" containerID="e2689751f8b8684895481897393875c2be4a4df62bf6f8d2b56b193e795d2ad0" Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.356519 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.377599 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.387181 4805 scope.go:117] "RemoveContainer" containerID="2fb2ce272370d28a640bf8d99732d15c346c48c2fa04024ff9e636df99e376a6" Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.389774 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 16 12:19:58 crc kubenswrapper[4805]: E1216 12:19:58.390401 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="sg-core" Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.390421 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="sg-core" Dec 16 12:19:58 crc kubenswrapper[4805]: E1216 12:19:58.390434 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="ceilometer-central-agent" Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.390441 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="ceilometer-central-agent" Dec 16 12:19:58 crc kubenswrapper[4805]: E1216 12:19:58.390464 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="ceilometer-notification-agent" Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.390471 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="ceilometer-notification-agent" Dec 16 12:19:58 crc kubenswrapper[4805]: E1216 12:19:58.390498 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="proxy-httpd" Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.390505 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="proxy-httpd" Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.390728 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="proxy-httpd" Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.390751 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="ceilometer-central-agent" Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.390763 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="sg-core" Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.390776 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" containerName="ceilometer-notification-agent" Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.394197 4805 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.406941 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.407215 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.407395 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.418521 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.475374 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/129d76c2-c0d0-4b1f-8157-ecc2abae65be-scripts\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.475436 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/129d76c2-c0d0-4b1f-8157-ecc2abae65be-config-data\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.475464 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/129d76c2-c0d0-4b1f-8157-ecc2abae65be-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.475494 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csjnr\" (UniqueName: \"kubernetes.io/projected/129d76c2-c0d0-4b1f-8157-ecc2abae65be-kube-api-access-csjnr\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.475517 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129d76c2-c0d0-4b1f-8157-ecc2abae65be-run-httpd\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.475583 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129d76c2-c0d0-4b1f-8157-ecc2abae65be-log-httpd\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.475679 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/129d76c2-c0d0-4b1f-8157-ecc2abae65be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.475707 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129d76c2-c0d0-4b1f-8157-ecc2abae65be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.534847 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf76587-1abd-4eba-b1e1-0d6bf44d7159" path="/var/lib/kubelet/pods/2cf76587-1abd-4eba-b1e1-0d6bf44d7159/volumes"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.578635 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/129d76c2-c0d0-4b1f-8157-ecc2abae65be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.578677 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129d76c2-c0d0-4b1f-8157-ecc2abae65be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.578788 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/129d76c2-c0d0-4b1f-8157-ecc2abae65be-scripts\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.578831 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/129d76c2-c0d0-4b1f-8157-ecc2abae65be-config-data\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.579044 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/129d76c2-c0d0-4b1f-8157-ecc2abae65be-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.579101 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csjnr\" (UniqueName: \"kubernetes.io/projected/129d76c2-c0d0-4b1f-8157-ecc2abae65be-kube-api-access-csjnr\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.579151 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129d76c2-c0d0-4b1f-8157-ecc2abae65be-run-httpd\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.579209 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129d76c2-c0d0-4b1f-8157-ecc2abae65be-log-httpd\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.579554 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129d76c2-c0d0-4b1f-8157-ecc2abae65be-log-httpd\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.580104 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/129d76c2-c0d0-4b1f-8157-ecc2abae65be-run-httpd\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.582295 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129d76c2-c0d0-4b1f-8157-ecc2abae65be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.582639 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/129d76c2-c0d0-4b1f-8157-ecc2abae65be-config-data\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.584250 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/129d76c2-c0d0-4b1f-8157-ecc2abae65be-scripts\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.595890 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/129d76c2-c0d0-4b1f-8157-ecc2abae65be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.596701 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/129d76c2-c0d0-4b1f-8157-ecc2abae65be-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.605952 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csjnr\" (UniqueName: \"kubernetes.io/projected/129d76c2-c0d0-4b1f-8157-ecc2abae65be-kube-api-access-csjnr\") pod \"ceilometer-0\" (UID: \"129d76c2-c0d0-4b1f-8157-ecc2abae65be\") " pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.729446 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.875135 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd"
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.966567 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-gdbhm"]
Dec 16 12:19:58 crc kubenswrapper[4805]: I1216 12:19:58.966864 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" podUID="6f69017e-0c0e-403d-9ec9-685b8b565878" containerName="dnsmasq-dns" containerID="cri-o://3309d1ec37e1429a474629b549e46332acc4572d27ef99b3338c21799ac377ce" gracePeriod=10
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.327490 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.363859 4805 generic.go:334] "Generic (PLEG): container finished" podID="6f69017e-0c0e-403d-9ec9-685b8b565878" containerID="3309d1ec37e1429a474629b549e46332acc4572d27ef99b3338c21799ac377ce" exitCode=0
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.363970 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" event={"ID":"6f69017e-0c0e-403d-9ec9-685b8b565878","Type":"ContainerDied","Data":"3309d1ec37e1429a474629b549e46332acc4572d27ef99b3338c21799ac377ce"}
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.509300 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm"
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.609976 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-dns-svc\") pod \"6f69017e-0c0e-403d-9ec9-685b8b565878\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") "
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.610135 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-dns-swift-storage-0\") pod \"6f69017e-0c0e-403d-9ec9-685b8b565878\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") "
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.610279 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw47q\" (UniqueName: \"kubernetes.io/projected/6f69017e-0c0e-403d-9ec9-685b8b565878-kube-api-access-vw47q\") pod \"6f69017e-0c0e-403d-9ec9-685b8b565878\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") "
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.611647 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-ovsdbserver-nb\") pod \"6f69017e-0c0e-403d-9ec9-685b8b565878\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") "
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.611681 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-config\") pod \"6f69017e-0c0e-403d-9ec9-685b8b565878\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") "
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.611713 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-ovsdbserver-sb\") pod \"6f69017e-0c0e-403d-9ec9-685b8b565878\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") "
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.639273 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f69017e-0c0e-403d-9ec9-685b8b565878-kube-api-access-vw47q" (OuterVolumeSpecName: "kube-api-access-vw47q") pod "6f69017e-0c0e-403d-9ec9-685b8b565878" (UID: "6f69017e-0c0e-403d-9ec9-685b8b565878"). InnerVolumeSpecName "kube-api-access-vw47q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.672643 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6f69017e-0c0e-403d-9ec9-685b8b565878" (UID: "6f69017e-0c0e-403d-9ec9-685b8b565878"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.707635 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-config" (OuterVolumeSpecName: "config") pod "6f69017e-0c0e-403d-9ec9-685b8b565878" (UID: "6f69017e-0c0e-403d-9ec9-685b8b565878"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.713246 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6f69017e-0c0e-403d-9ec9-685b8b565878" (UID: "6f69017e-0c0e-403d-9ec9-685b8b565878"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.714049 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-dns-swift-storage-0\") pod \"6f69017e-0c0e-403d-9ec9-685b8b565878\" (UID: \"6f69017e-0c0e-403d-9ec9-685b8b565878\") "
Dec 16 12:19:59 crc kubenswrapper[4805]: W1216 12:19:59.714255 4805 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6f69017e-0c0e-403d-9ec9-685b8b565878/volumes/kubernetes.io~configmap/dns-swift-storage-0
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.714284 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6f69017e-0c0e-403d-9ec9-685b8b565878" (UID: "6f69017e-0c0e-403d-9ec9-685b8b565878"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.714577 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.714602 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw47q\" (UniqueName: \"kubernetes.io/projected/6f69017e-0c0e-403d-9ec9-685b8b565878-kube-api-access-vw47q\") on node \"crc\" DevicePath \"\""
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.714618 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-config\") on node \"crc\" DevicePath \"\""
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.714631 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.715779 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f69017e-0c0e-403d-9ec9-685b8b565878" (UID: "6f69017e-0c0e-403d-9ec9-685b8b565878"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.720500 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f69017e-0c0e-403d-9ec9-685b8b565878" (UID: "6f69017e-0c0e-403d-9ec9-685b8b565878"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.815902 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 16 12:19:59 crc kubenswrapper[4805]: I1216 12:19:59.815935 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f69017e-0c0e-403d-9ec9-685b8b565878-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:00 crc kubenswrapper[4805]: I1216 12:20:00.393275 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"129d76c2-c0d0-4b1f-8157-ecc2abae65be","Type":"ContainerStarted","Data":"baa8726807fe94c638b79dde22110f47a51d4fc7a85baf9f7e341627eb130143"}
Dec 16 12:20:00 crc kubenswrapper[4805]: I1216 12:20:00.393614 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"129d76c2-c0d0-4b1f-8157-ecc2abae65be","Type":"ContainerStarted","Data":"d975b3b35f22d936727d72bd32c07ce15db593827ce7bad1d39d7a037020795e"}
Dec 16 12:20:00 crc kubenswrapper[4805]: I1216 12:20:00.395996 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm" event={"ID":"6f69017e-0c0e-403d-9ec9-685b8b565878","Type":"ContainerDied","Data":"4b7b1baa44cd4160f05c3c6bfc9b525d47505ca128d037b466943f4cf5482343"}
Dec 16 12:20:00 crc kubenswrapper[4805]: I1216 12:20:00.396051 4805 scope.go:117] "RemoveContainer" containerID="3309d1ec37e1429a474629b549e46332acc4572d27ef99b3338c21799ac377ce"
Dec 16 12:20:00 crc kubenswrapper[4805]: I1216 12:20:00.396416 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-gdbhm"
Dec 16 12:20:00 crc kubenswrapper[4805]: I1216 12:20:00.437494 4805 scope.go:117] "RemoveContainer" containerID="1fa6ac65aea2d3d8b6db6d8d69f2330e983f0c6c685ea35225165a8353004df6"
Dec 16 12:20:00 crc kubenswrapper[4805]: I1216 12:20:00.451473 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-gdbhm"]
Dec 16 12:20:00 crc kubenswrapper[4805]: I1216 12:20:00.460871 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-gdbhm"]
Dec 16 12:20:00 crc kubenswrapper[4805]: I1216 12:20:00.537584 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f69017e-0c0e-403d-9ec9-685b8b565878" path="/var/lib/kubelet/pods/6f69017e-0c0e-403d-9ec9-685b8b565878/volumes"
Dec 16 12:20:01 crc kubenswrapper[4805]: I1216 12:20:01.407501 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"129d76c2-c0d0-4b1f-8157-ecc2abae65be","Type":"ContainerStarted","Data":"b2d327031cfa8f862ac1c5112a1a989ef7366dc264de67ebfa66085f2fdf2326"}
Dec 16 12:20:03 crc kubenswrapper[4805]: I1216 12:20:03.435552 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"129d76c2-c0d0-4b1f-8157-ecc2abae65be","Type":"ContainerStarted","Data":"2c484c18dfeb1a0e18e95a1c5e21eee3953aaa83b0d12aa2f9ef347e4d857503"}
Dec 16 12:20:03 crc kubenswrapper[4805]: I1216 12:20:03.437941 4805 generic.go:334] "Generic (PLEG): container finished" podID="8213d94b-5cc7-407e-aedf-298f52f52198" containerID="9bfca712ca63c20fa5d088250ff1b0f5b3ee8b603e1647e152a27f782cce0573" exitCode=0
Dec 16 12:20:03 crc kubenswrapper[4805]: I1216 12:20:03.437993 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-h574q" event={"ID":"8213d94b-5cc7-407e-aedf-298f52f52198","Type":"ContainerDied","Data":"9bfca712ca63c20fa5d088250ff1b0f5b3ee8b603e1647e152a27f782cce0573"}
Dec 16 12:20:04 crc kubenswrapper[4805]: I1216 12:20:04.459040 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"129d76c2-c0d0-4b1f-8157-ecc2abae65be","Type":"ContainerStarted","Data":"1e09c587bab161ba9f0523e9c43966d09dac3a7f52197fd4a7ad0209aebdc05d"}
Dec 16 12:20:04 crc kubenswrapper[4805]: I1216 12:20:04.459449 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 16 12:20:04 crc kubenswrapper[4805]: I1216 12:20:04.505395 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.906489882 podStartE2EDuration="6.505372706s" podCreationTimestamp="2025-12-16 12:19:58 +0000 UTC" firstStartedPulling="2025-12-16 12:19:59.343162702 +0000 UTC m=+1473.061420507" lastFinishedPulling="2025-12-16 12:20:03.942045526 +0000 UTC m=+1477.660303331" observedRunningTime="2025-12-16 12:20:04.487769041 +0000 UTC m=+1478.206026866" watchObservedRunningTime="2025-12-16 12:20:04.505372706 +0000 UTC m=+1478.223630521"
Dec 16 12:20:04 crc kubenswrapper[4805]: I1216 12:20:04.888828 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-h574q"
Dec 16 12:20:04 crc kubenswrapper[4805]: I1216 12:20:04.938109 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-combined-ca-bundle\") pod \"8213d94b-5cc7-407e-aedf-298f52f52198\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") "
Dec 16 12:20:04 crc kubenswrapper[4805]: I1216 12:20:04.938726 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-scripts\") pod \"8213d94b-5cc7-407e-aedf-298f52f52198\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") "
Dec 16 12:20:04 crc kubenswrapper[4805]: I1216 12:20:04.938900 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvggn\" (UniqueName: \"kubernetes.io/projected/8213d94b-5cc7-407e-aedf-298f52f52198-kube-api-access-xvggn\") pod \"8213d94b-5cc7-407e-aedf-298f52f52198\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") "
Dec 16 12:20:04 crc kubenswrapper[4805]: I1216 12:20:04.939012 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-config-data\") pod \"8213d94b-5cc7-407e-aedf-298f52f52198\" (UID: \"8213d94b-5cc7-407e-aedf-298f52f52198\") "
Dec 16 12:20:04 crc kubenswrapper[4805]: I1216 12:20:04.947043 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8213d94b-5cc7-407e-aedf-298f52f52198-kube-api-access-xvggn" (OuterVolumeSpecName: "kube-api-access-xvggn") pod "8213d94b-5cc7-407e-aedf-298f52f52198" (UID: "8213d94b-5cc7-407e-aedf-298f52f52198"). InnerVolumeSpecName "kube-api-access-xvggn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 12:20:04 crc kubenswrapper[4805]: I1216 12:20:04.963750 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-scripts" (OuterVolumeSpecName: "scripts") pod "8213d94b-5cc7-407e-aedf-298f52f52198" (UID: "8213d94b-5cc7-407e-aedf-298f52f52198"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 12:20:04 crc kubenswrapper[4805]: I1216 12:20:04.992324 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-config-data" (OuterVolumeSpecName: "config-data") pod "8213d94b-5cc7-407e-aedf-298f52f52198" (UID: "8213d94b-5cc7-407e-aedf-298f52f52198"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.029015 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8213d94b-5cc7-407e-aedf-298f52f52198" (UID: "8213d94b-5cc7-407e-aedf-298f52f52198"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.041942 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-config-data\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.041976 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.041991 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8213d94b-5cc7-407e-aedf-298f52f52198-scripts\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.042002 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvggn\" (UniqueName: \"kubernetes.io/projected/8213d94b-5cc7-407e-aedf-298f52f52198-kube-api-access-xvggn\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.469884 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-h574q"
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.475337 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-h574q" event={"ID":"8213d94b-5cc7-407e-aedf-298f52f52198","Type":"ContainerDied","Data":"8e8601d928578e5aaca8ca207d7357b7119aa7f8d4256764fba4e6c7b95ef4ae"}
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.475380 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e8601d928578e5aaca8ca207d7357b7119aa7f8d4256764fba4e6c7b95ef4ae"
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.657987 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.658251 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="45414083-33e0-4328-9064-68d63adecbeb" containerName="nova-api-log" containerID="cri-o://e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4" gracePeriod=30
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.658664 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="45414083-33e0-4328-9064-68d63adecbeb" containerName="nova-api-api" containerID="cri-o://efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15" gracePeriod=30
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.673420 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.673769 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="81e50677-0733-4d96-97a5-6c2b12ecef0c" containerName="nova-scheduler-scheduler" containerID="cri-o://be95fd97bccd3e1e886dc276863dc8703f2b7e775aba0fb4cce7a45adf2df080" gracePeriod=30
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.723996 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.724255 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8cf5e17d-9fef-467a-97a6-6a02b7922808" containerName="nova-metadata-log" containerID="cri-o://e7fa0c4c51c53c990a91ef90c2316a4ee41ab6826acacbd5acc9839b734ce24e" gracePeriod=30
Dec 16 12:20:05 crc kubenswrapper[4805]: I1216 12:20:05.724316 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8cf5e17d-9fef-467a-97a6-6a02b7922808" containerName="nova-metadata-metadata" containerID="cri-o://8dd5452ce7efcae57a34a5b077e3058df7c05521a46682bdbb9c1274f86799eb" gracePeriod=30
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.450753 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.474841 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-config-data\") pod \"45414083-33e0-4328-9064-68d63adecbeb\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") "
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.474957 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-combined-ca-bundle\") pod \"45414083-33e0-4328-9064-68d63adecbeb\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") "
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.475742 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-public-tls-certs\") pod \"45414083-33e0-4328-9064-68d63adecbeb\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") "
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.475812 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45414083-33e0-4328-9064-68d63adecbeb-logs\") pod \"45414083-33e0-4328-9064-68d63adecbeb\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") "
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.475911 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-internal-tls-certs\") pod \"45414083-33e0-4328-9064-68d63adecbeb\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") "
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.475993 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lxk5\" (UniqueName: \"kubernetes.io/projected/45414083-33e0-4328-9064-68d63adecbeb-kube-api-access-6lxk5\") pod \"45414083-33e0-4328-9064-68d63adecbeb\" (UID: \"45414083-33e0-4328-9064-68d63adecbeb\") "
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.477247 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45414083-33e0-4328-9064-68d63adecbeb-logs" (OuterVolumeSpecName: "logs") pod "45414083-33e0-4328-9064-68d63adecbeb" (UID: "45414083-33e0-4328-9064-68d63adecbeb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.485280 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45414083-33e0-4328-9064-68d63adecbeb-kube-api-access-6lxk5" (OuterVolumeSpecName: "kube-api-access-6lxk5") pod "45414083-33e0-4328-9064-68d63adecbeb" (UID: "45414083-33e0-4328-9064-68d63adecbeb"). InnerVolumeSpecName "kube-api-access-6lxk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.498395 4805 generic.go:334] "Generic (PLEG): container finished" podID="8cf5e17d-9fef-467a-97a6-6a02b7922808" containerID="e7fa0c4c51c53c990a91ef90c2316a4ee41ab6826acacbd5acc9839b734ce24e" exitCode=143
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.498741 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cf5e17d-9fef-467a-97a6-6a02b7922808","Type":"ContainerDied","Data":"e7fa0c4c51c53c990a91ef90c2316a4ee41ab6826acacbd5acc9839b734ce24e"}
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.501586 4805 generic.go:334] "Generic (PLEG): container finished" podID="45414083-33e0-4328-9064-68d63adecbeb" containerID="efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15" exitCode=0
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.501855 4805 generic.go:334] "Generic (PLEG): container finished" podID="45414083-33e0-4328-9064-68d63adecbeb" containerID="e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4" exitCode=143
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.503090 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.503834 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45414083-33e0-4328-9064-68d63adecbeb","Type":"ContainerDied","Data":"efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15"}
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.503937 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45414083-33e0-4328-9064-68d63adecbeb","Type":"ContainerDied","Data":"e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4"}
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.504016 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45414083-33e0-4328-9064-68d63adecbeb","Type":"ContainerDied","Data":"b540712da1e2fdf5d1940441f49fe49313d5ba78b234ea1be8044f30ef006d33"}
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.504100 4805 scope.go:117] "RemoveContainer" containerID="efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.518836 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-config-data" (OuterVolumeSpecName: "config-data") pod "45414083-33e0-4328-9064-68d63adecbeb" (UID: "45414083-33e0-4328-9064-68d63adecbeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.543877 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45414083-33e0-4328-9064-68d63adecbeb" (UID: "45414083-33e0-4328-9064-68d63adecbeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.563809 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "45414083-33e0-4328-9064-68d63adecbeb" (UID: "45414083-33e0-4328-9064-68d63adecbeb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.578666 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.580939 4805 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-public-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.580982 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45414083-33e0-4328-9064-68d63adecbeb-logs\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.580993 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lxk5\" (UniqueName: \"kubernetes.io/projected/45414083-33e0-4328-9064-68d63adecbeb-kube-api-access-6lxk5\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.581003 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-config-data\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.591455 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "45414083-33e0-4328-9064-68d63adecbeb" (UID: "45414083-33e0-4328-9064-68d63adecbeb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.611649 4805 scope.go:117] "RemoveContainer" containerID="e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.638956 4805 scope.go:117] "RemoveContainer" containerID="efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15"
Dec 16 12:20:06 crc kubenswrapper[4805]: E1216 12:20:06.639597 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15\": container with ID starting with efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15 not found: ID does not exist" containerID="efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.639638 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15"} err="failed to get container status \"efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15\": rpc error: code = NotFound desc = could not find container \"efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15\": container with ID starting with efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15 not found: ID does not exist"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.639663 4805 scope.go:117] "RemoveContainer" containerID="e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4"
Dec 16 12:20:06 crc kubenswrapper[4805]: E1216 12:20:06.640124 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4\": container with ID starting with e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4 not found: ID does not exist" containerID="e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.640204 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4"} err="failed to get container status \"e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4\": rpc error: code = NotFound desc = could not find container \"e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4\": container with ID starting with e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4 not found: ID does not exist"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.640222 4805 scope.go:117] "RemoveContainer" containerID="efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.640482 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15"} err="failed to get container status \"efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15\": rpc error: code = NotFound desc = could not find container \"efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15\": container with ID starting with efd38a18f92fb5bdbed4b08707a44a3c2d76d5f65c091a6791109276e1efea15 not found: ID does not exist"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.640504 4805 scope.go:117] "RemoveContainer" containerID="e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.640784 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4"} err="failed to get container status \"e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4\": rpc error: code = NotFound desc = could not find container \"e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4\": container with ID starting with e51cbb02a2b6e9354901bb086522de3861312a83df0dad51870c23e9267b8cd4 not found: ID does not exist"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.683260 4805 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45414083-33e0-4328-9064-68d63adecbeb-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.837303 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.889030 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.911590 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 16 12:20:06 crc kubenswrapper[4805]: E1216 12:20:06.912393 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f69017e-0c0e-403d-9ec9-685b8b565878" containerName="init"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.912457 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f69017e-0c0e-403d-9ec9-685b8b565878" containerName="init"
Dec 16 12:20:06 crc kubenswrapper[4805]: E1216 12:20:06.912518 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8213d94b-5cc7-407e-aedf-298f52f52198" containerName="nova-manage"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.912565 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8213d94b-5cc7-407e-aedf-298f52f52198" containerName="nova-manage"
Dec 16 12:20:06 crc kubenswrapper[4805]: E1216 12:20:06.912636 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f69017e-0c0e-403d-9ec9-685b8b565878" containerName="dnsmasq-dns"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.912684 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f69017e-0c0e-403d-9ec9-685b8b565878" containerName="dnsmasq-dns"
Dec 16 12:20:06 crc kubenswrapper[4805]: E1216 12:20:06.912736 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45414083-33e0-4328-9064-68d63adecbeb" containerName="nova-api-api"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.912781 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="45414083-33e0-4328-9064-68d63adecbeb" containerName="nova-api-api"
Dec 16 12:20:06 crc kubenswrapper[4805]: E1216 12:20:06.912843 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45414083-33e0-4328-9064-68d63adecbeb" containerName="nova-api-log"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.912888 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="45414083-33e0-4328-9064-68d63adecbeb" containerName="nova-api-log"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.913115 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="45414083-33e0-4328-9064-68d63adecbeb" containerName="nova-api-log"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.913202 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8213d94b-5cc7-407e-aedf-298f52f52198" containerName="nova-manage"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.913261 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f69017e-0c0e-403d-9ec9-685b8b565878" containerName="dnsmasq-dns"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.913311 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="45414083-33e0-4328-9064-68d63adecbeb" containerName="nova-api-api"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.914433 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.917486 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.917801 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.917986 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.924859 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.990256 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31340444-d4f0-468b-9acb-ca27b87165a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.990672 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31340444-d4f0-468b-9acb-ca27b87165a9-logs\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.990935 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4d66\" (UniqueName: \"kubernetes.io/projected/31340444-d4f0-468b-9acb-ca27b87165a9-kube-api-access-q4d66\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.991088 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31340444-d4f0-468b-9acb-ca27b87165a9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.991269 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31340444-d4f0-468b-9acb-ca27b87165a9-config-data\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:06 crc kubenswrapper[4805]: I1216 12:20:06.991394 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31340444-d4f0-468b-9acb-ca27b87165a9-public-tls-certs\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:07 crc kubenswrapper[4805]: I1216 12:20:07.093730 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31340444-d4f0-468b-9acb-ca27b87165a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:07 crc kubenswrapper[4805]: I1216 12:20:07.094024 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31340444-d4f0-468b-9acb-ca27b87165a9-logs\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:07 crc kubenswrapper[4805]: I1216 12:20:07.094175 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4d66\" (UniqueName: \"kubernetes.io/projected/31340444-d4f0-468b-9acb-ca27b87165a9-kube-api-access-q4d66\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:07 crc kubenswrapper[4805]: I1216 12:20:07.094291 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31340444-d4f0-468b-9acb-ca27b87165a9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:07 crc kubenswrapper[4805]: I1216 12:20:07.094402 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31340444-d4f0-468b-9acb-ca27b87165a9-config-data\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:07 crc kubenswrapper[4805]: I1216 12:20:07.094543 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31340444-d4f0-468b-9acb-ca27b87165a9-public-tls-certs\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:07 crc kubenswrapper[4805]: I1216 12:20:07.096010 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31340444-d4f0-468b-9acb-ca27b87165a9-logs\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:07 crc kubenswrapper[4805]: I1216 12:20:07.099421 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31340444-d4f0-468b-9acb-ca27b87165a9-public-tls-certs\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:07 crc kubenswrapper[4805]: I1216 12:20:07.099926 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31340444-d4f0-468b-9acb-ca27b87165a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:07 crc kubenswrapper[4805]: I1216 12:20:07.102036 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31340444-d4f0-468b-9acb-ca27b87165a9-config-data\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:07 crc kubenswrapper[4805]: I1216 12:20:07.104558 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31340444-d4f0-468b-9acb-ca27b87165a9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:07 crc kubenswrapper[4805]: I1216 12:20:07.116709 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4d66\" (UniqueName: \"kubernetes.io/projected/31340444-d4f0-468b-9acb-ca27b87165a9-kube-api-access-q4d66\") pod \"nova-api-0\" (UID: \"31340444-d4f0-468b-9acb-ca27b87165a9\") " pod="openstack/nova-api-0"
Dec 16 12:20:07 crc kubenswrapper[4805]: I1216 12:20:07.232602 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 16 12:20:07 crc kubenswrapper[4805]: I1216 12:20:07.816002 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 16 12:20:08 crc kubenswrapper[4805]: I1216 12:20:08.537346 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45414083-33e0-4328-9064-68d63adecbeb" path="/var/lib/kubelet/pods/45414083-33e0-4328-9064-68d63adecbeb/volumes"
Dec 16 12:20:08 crc kubenswrapper[4805]: I1216 12:20:08.538765 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"31340444-d4f0-468b-9acb-ca27b87165a9","Type":"ContainerStarted","Data":"0c863d8d94dd36e1819ee7b4da700412fe08a6e2d65b9098b5dc23b9b05272ae"}
Dec 16 12:20:08 crc kubenswrapper[4805]: I1216 12:20:08.538795 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"31340444-d4f0-468b-9acb-ca27b87165a9","Type":"ContainerStarted","Data":"3af62c92795371ff8fe4fee6cc6bb18599e1e717461d243320fb845dc4422143"}
Dec 16 12:20:08 crc kubenswrapper[4805]: I1216 12:20:08.538805 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"31340444-d4f0-468b-9acb-ca27b87165a9","Type":"ContainerStarted","Data":"e820f5214ac7b46f2ef2f0bacabb50caf7138617c68a965d3cbaeb20b3d15708"}
Dec 16 12:20:08 crc kubenswrapper[4805]: I1216 12:20:08.565797 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.565775802 podStartE2EDuration="2.565775802s" podCreationTimestamp="2025-12-16 12:20:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:20:08.558434422 +0000 UTC m=+1482.276692247" watchObservedRunningTime="2025-12-16 12:20:08.565775802 +0000 UTC m=+1482.284033617"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.335077 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.444308 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-nova-metadata-tls-certs\") pod \"8cf5e17d-9fef-467a-97a6-6a02b7922808\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") "
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.444616 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-combined-ca-bundle\") pod \"8cf5e17d-9fef-467a-97a6-6a02b7922808\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") "
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.444703 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwr9j\" (UniqueName: \"kubernetes.io/projected/8cf5e17d-9fef-467a-97a6-6a02b7922808-kube-api-access-jwr9j\") pod \"8cf5e17d-9fef-467a-97a6-6a02b7922808\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") "
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.444740 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cf5e17d-9fef-467a-97a6-6a02b7922808-logs\") pod \"8cf5e17d-9fef-467a-97a6-6a02b7922808\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") "
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.444810 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-config-data\") pod \"8cf5e17d-9fef-467a-97a6-6a02b7922808\" (UID: \"8cf5e17d-9fef-467a-97a6-6a02b7922808\") "
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.446403 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cf5e17d-9fef-467a-97a6-6a02b7922808-logs" (OuterVolumeSpecName: "logs") pod "8cf5e17d-9fef-467a-97a6-6a02b7922808" (UID: "8cf5e17d-9fef-467a-97a6-6a02b7922808"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.451130 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf5e17d-9fef-467a-97a6-6a02b7922808-kube-api-access-jwr9j" (OuterVolumeSpecName: "kube-api-access-jwr9j") pod "8cf5e17d-9fef-467a-97a6-6a02b7922808" (UID: "8cf5e17d-9fef-467a-97a6-6a02b7922808"). InnerVolumeSpecName "kube-api-access-jwr9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.499679 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-config-data" (OuterVolumeSpecName: "config-data") pod "8cf5e17d-9fef-467a-97a6-6a02b7922808" (UID: "8cf5e17d-9fef-467a-97a6-6a02b7922808"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.512305 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cf5e17d-9fef-467a-97a6-6a02b7922808" (UID: "8cf5e17d-9fef-467a-97a6-6a02b7922808"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.517895 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8cf5e17d-9fef-467a-97a6-6a02b7922808" (UID: "8cf5e17d-9fef-467a-97a6-6a02b7922808"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.547703 4805 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.547740 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.547750 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwr9j\" (UniqueName: \"kubernetes.io/projected/8cf5e17d-9fef-467a-97a6-6a02b7922808-kube-api-access-jwr9j\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.547763 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cf5e17d-9fef-467a-97a6-6a02b7922808-logs\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.547773 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cf5e17d-9fef-467a-97a6-6a02b7922808-config-data\") on node \"crc\" DevicePath \"\""
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.562889 4805 generic.go:334] "Generic (PLEG): container finished" podID="8cf5e17d-9fef-467a-97a6-6a02b7922808" containerID="8dd5452ce7efcae57a34a5b077e3058df7c05521a46682bdbb9c1274f86799eb" exitCode=0
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.563296 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.563340 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cf5e17d-9fef-467a-97a6-6a02b7922808","Type":"ContainerDied","Data":"8dd5452ce7efcae57a34a5b077e3058df7c05521a46682bdbb9c1274f86799eb"}
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.564527 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cf5e17d-9fef-467a-97a6-6a02b7922808","Type":"ContainerDied","Data":"5c1314e0fbba5babeee16a7cb04fb18ca7c31ae30e2bd79237b4dfaed13726c2"}
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.564584 4805 scope.go:117] "RemoveContainer" containerID="8dd5452ce7efcae57a34a5b077e3058df7c05521a46682bdbb9c1274f86799eb"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.666038 4805 scope.go:117] "RemoveContainer" containerID="e7fa0c4c51c53c990a91ef90c2316a4ee41ab6826acacbd5acc9839b734ce24e"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.695127 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.704474 4805 scope.go:117] "RemoveContainer" containerID="8dd5452ce7efcae57a34a5b077e3058df7c05521a46682bdbb9c1274f86799eb"
Dec 16 12:20:09 crc kubenswrapper[4805]: E1216 12:20:09.706126 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dd5452ce7efcae57a34a5b077e3058df7c05521a46682bdbb9c1274f86799eb\": container with ID starting with 8dd5452ce7efcae57a34a5b077e3058df7c05521a46682bdbb9c1274f86799eb not found: ID does not exist" containerID="8dd5452ce7efcae57a34a5b077e3058df7c05521a46682bdbb9c1274f86799eb"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.706190 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dd5452ce7efcae57a34a5b077e3058df7c05521a46682bdbb9c1274f86799eb"} err="failed to get container status \"8dd5452ce7efcae57a34a5b077e3058df7c05521a46682bdbb9c1274f86799eb\": rpc error: code = NotFound desc = could not find container \"8dd5452ce7efcae57a34a5b077e3058df7c05521a46682bdbb9c1274f86799eb\": container with ID starting with 8dd5452ce7efcae57a34a5b077e3058df7c05521a46682bdbb9c1274f86799eb not found: ID does not exist"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.706220 4805 scope.go:117] "RemoveContainer" containerID="e7fa0c4c51c53c990a91ef90c2316a4ee41ab6826acacbd5acc9839b734ce24e"
Dec 16 12:20:09 crc kubenswrapper[4805]: E1216 12:20:09.706559 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7fa0c4c51c53c990a91ef90c2316a4ee41ab6826acacbd5acc9839b734ce24e\": container with ID starting with e7fa0c4c51c53c990a91ef90c2316a4ee41ab6826acacbd5acc9839b734ce24e not found: ID does not exist" containerID="e7fa0c4c51c53c990a91ef90c2316a4ee41ab6826acacbd5acc9839b734ce24e"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.706647 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7fa0c4c51c53c990a91ef90c2316a4ee41ab6826acacbd5acc9839b734ce24e"} err="failed to get container status \"e7fa0c4c51c53c990a91ef90c2316a4ee41ab6826acacbd5acc9839b734ce24e\": rpc error: code = NotFound desc = could not find container \"e7fa0c4c51c53c990a91ef90c2316a4ee41ab6826acacbd5acc9839b734ce24e\": container with ID starting with e7fa0c4c51c53c990a91ef90c2316a4ee41ab6826acacbd5acc9839b734ce24e not found: ID does not exist"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.710316 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.723062 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 16 12:20:09 crc kubenswrapper[4805]: E1216 12:20:09.723798 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf5e17d-9fef-467a-97a6-6a02b7922808" containerName="nova-metadata-metadata"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.723826 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf5e17d-9fef-467a-97a6-6a02b7922808" containerName="nova-metadata-metadata"
Dec 16 12:20:09 crc kubenswrapper[4805]: E1216 12:20:09.723881 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf5e17d-9fef-467a-97a6-6a02b7922808" containerName="nova-metadata-log"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.723891 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf5e17d-9fef-467a-97a6-6a02b7922808" containerName="nova-metadata-log"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.724213 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf5e17d-9fef-467a-97a6-6a02b7922808" containerName="nova-metadata-metadata"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.724255 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf5e17d-9fef-467a-97a6-6a02b7922808" containerName="nova-metadata-log"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.725332 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.727563 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.728086 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.736807 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.766688 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkzjl\" (UniqueName: \"kubernetes.io/projected/ba40942d-c8ca-45da-b36d-7e447dac985e-kube-api-access-gkzjl\") pod \"nova-metadata-0\" (UID: \"ba40942d-c8ca-45da-b36d-7e447dac985e\") " pod="openstack/nova-metadata-0"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.766748 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba40942d-c8ca-45da-b36d-7e447dac985e-logs\") pod \"nova-metadata-0\" (UID: \"ba40942d-c8ca-45da-b36d-7e447dac985e\") " pod="openstack/nova-metadata-0"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.766803 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba40942d-c8ca-45da-b36d-7e447dac985e-config-data\") pod \"nova-metadata-0\" (UID: \"ba40942d-c8ca-45da-b36d-7e447dac985e\") " pod="openstack/nova-metadata-0"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.767114 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba40942d-c8ca-45da-b36d-7e447dac985e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba40942d-c8ca-45da-b36d-7e447dac985e\") " pod="openstack/nova-metadata-0"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.767208 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba40942d-c8ca-45da-b36d-7e447dac985e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba40942d-c8ca-45da-b36d-7e447dac985e\") " pod="openstack/nova-metadata-0"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.869361 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkzjl\" (UniqueName: \"kubernetes.io/projected/ba40942d-c8ca-45da-b36d-7e447dac985e-kube-api-access-gkzjl\") pod \"nova-metadata-0\" (UID: \"ba40942d-c8ca-45da-b36d-7e447dac985e\") " pod="openstack/nova-metadata-0"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.869643 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba40942d-c8ca-45da-b36d-7e447dac985e-logs\") pod \"nova-metadata-0\" (UID: \"ba40942d-c8ca-45da-b36d-7e447dac985e\") " pod="openstack/nova-metadata-0"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.869775 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba40942d-c8ca-45da-b36d-7e447dac985e-config-data\") pod \"nova-metadata-0\" (UID: \"ba40942d-c8ca-45da-b36d-7e447dac985e\") " pod="openstack/nova-metadata-0"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.870059 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba40942d-c8ca-45da-b36d-7e447dac985e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba40942d-c8ca-45da-b36d-7e447dac985e\") " pod="openstack/nova-metadata-0"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.870126 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba40942d-c8ca-45da-b36d-7e447dac985e-logs\") pod \"nova-metadata-0\" (UID: \"ba40942d-c8ca-45da-b36d-7e447dac985e\") " pod="openstack/nova-metadata-0"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.870186 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba40942d-c8ca-45da-b36d-7e447dac985e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba40942d-c8ca-45da-b36d-7e447dac985e\") " pod="openstack/nova-metadata-0"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.873478 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba40942d-c8ca-45da-b36d-7e447dac985e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba40942d-c8ca-45da-b36d-7e447dac985e\") " pod="openstack/nova-metadata-0"
Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.873908 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba40942d-c8ca-45da-b36d-7e447dac985e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba40942d-c8ca-45da-b36d-7e447dac985e\") "
pod="openstack/nova-metadata-0" Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.873933 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba40942d-c8ca-45da-b36d-7e447dac985e-config-data\") pod \"nova-metadata-0\" (UID: \"ba40942d-c8ca-45da-b36d-7e447dac985e\") " pod="openstack/nova-metadata-0" Dec 16 12:20:09 crc kubenswrapper[4805]: I1216 12:20:09.890287 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkzjl\" (UniqueName: \"kubernetes.io/projected/ba40942d-c8ca-45da-b36d-7e447dac985e-kube-api-access-gkzjl\") pod \"nova-metadata-0\" (UID: \"ba40942d-c8ca-45da-b36d-7e447dac985e\") " pod="openstack/nova-metadata-0" Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.050473 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 16 12:20:10 crc kubenswrapper[4805]: E1216 12:20:10.062698 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be95fd97bccd3e1e886dc276863dc8703f2b7e775aba0fb4cce7a45adf2df080" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 16 12:20:10 crc kubenswrapper[4805]: E1216 12:20:10.064635 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be95fd97bccd3e1e886dc276863dc8703f2b7e775aba0fb4cce7a45adf2df080" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 16 12:20:10 crc kubenswrapper[4805]: E1216 12:20:10.066489 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be95fd97bccd3e1e886dc276863dc8703f2b7e775aba0fb4cce7a45adf2df080" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 16 12:20:10 crc kubenswrapper[4805]: E1216 12:20:10.066606 4805 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="81e50677-0733-4d96-97a5-6c2b12ecef0c" containerName="nova-scheduler-scheduler" Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.539555 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cf5e17d-9fef-467a-97a6-6a02b7922808" path="/var/lib/kubelet/pods/8cf5e17d-9fef-467a-97a6-6a02b7922808/volumes" Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.556006 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.580701 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba40942d-c8ca-45da-b36d-7e447dac985e","Type":"ContainerStarted","Data":"dea4479a746b3d5bd401f6107efa144779738467ca18409af87a620b9da6dc95"} Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.586762 4805 generic.go:334] "Generic (PLEG): container finished" podID="81e50677-0733-4d96-97a5-6c2b12ecef0c" containerID="be95fd97bccd3e1e886dc276863dc8703f2b7e775aba0fb4cce7a45adf2df080" exitCode=0 Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.586804 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"81e50677-0733-4d96-97a5-6c2b12ecef0c","Type":"ContainerDied","Data":"be95fd97bccd3e1e886dc276863dc8703f2b7e775aba0fb4cce7a45adf2df080"} Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.810001 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.890747 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81e50677-0733-4d96-97a5-6c2b12ecef0c-config-data\") pod \"81e50677-0733-4d96-97a5-6c2b12ecef0c\" (UID: \"81e50677-0733-4d96-97a5-6c2b12ecef0c\") " Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.890805 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtlx4\" (UniqueName: \"kubernetes.io/projected/81e50677-0733-4d96-97a5-6c2b12ecef0c-kube-api-access-dtlx4\") pod \"81e50677-0733-4d96-97a5-6c2b12ecef0c\" (UID: \"81e50677-0733-4d96-97a5-6c2b12ecef0c\") " Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.891765 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e50677-0733-4d96-97a5-6c2b12ecef0c-combined-ca-bundle\") pod \"81e50677-0733-4d96-97a5-6c2b12ecef0c\" (UID: \"81e50677-0733-4d96-97a5-6c2b12ecef0c\") " Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.898341 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e50677-0733-4d96-97a5-6c2b12ecef0c-kube-api-access-dtlx4" (OuterVolumeSpecName: "kube-api-access-dtlx4") pod "81e50677-0733-4d96-97a5-6c2b12ecef0c" (UID: "81e50677-0733-4d96-97a5-6c2b12ecef0c"). InnerVolumeSpecName "kube-api-access-dtlx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.936403 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81e50677-0733-4d96-97a5-6c2b12ecef0c-config-data" (OuterVolumeSpecName: "config-data") pod "81e50677-0733-4d96-97a5-6c2b12ecef0c" (UID: "81e50677-0733-4d96-97a5-6c2b12ecef0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.964367 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81e50677-0733-4d96-97a5-6c2b12ecef0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81e50677-0733-4d96-97a5-6c2b12ecef0c" (UID: "81e50677-0733-4d96-97a5-6c2b12ecef0c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.994507 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81e50677-0733-4d96-97a5-6c2b12ecef0c-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.994802 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtlx4\" (UniqueName: \"kubernetes.io/projected/81e50677-0733-4d96-97a5-6c2b12ecef0c-kube-api-access-dtlx4\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:10 crc kubenswrapper[4805]: I1216 12:20:10.994819 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e50677-0733-4d96-97a5-6c2b12ecef0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.596783 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"81e50677-0733-4d96-97a5-6c2b12ecef0c","Type":"ContainerDied","Data":"d281c79087136fda5ff3fe4d225529df2c4017ecc2747922f323c46fb7bf1c84"} Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.596837 4805 scope.go:117] "RemoveContainer" containerID="be95fd97bccd3e1e886dc276863dc8703f2b7e775aba0fb4cce7a45adf2df080" Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.596942 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.601882 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba40942d-c8ca-45da-b36d-7e447dac985e","Type":"ContainerStarted","Data":"762683bbc4e596d2561419b89b590d0ececf4376273b9d21acb2063358a55b66"} Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.601932 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba40942d-c8ca-45da-b36d-7e447dac985e","Type":"ContainerStarted","Data":"40078f1037459c438a342e322393ab465440a5fc842aaad6d0f57309b6dd385c"} Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.640091 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.640067458 podStartE2EDuration="2.640067458s" podCreationTimestamp="2025-12-16 12:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:20:11.622965578 +0000 UTC m=+1485.341223393" watchObservedRunningTime="2025-12-16 12:20:11.640067458 +0000 UTC m=+1485.358325283" Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.652931 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.679205 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.703588 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 16 12:20:11 crc kubenswrapper[4805]: E1216 12:20:11.704339 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81e50677-0733-4d96-97a5-6c2b12ecef0c" containerName="nova-scheduler-scheduler" Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.704415 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="81e50677-0733-4d96-97a5-6c2b12ecef0c" containerName="nova-scheduler-scheduler" 
Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.704692 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="81e50677-0733-4d96-97a5-6c2b12ecef0c" containerName="nova-scheduler-scheduler"
Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.705529 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.709851 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.714909 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.810880 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br7pq\" (UniqueName: \"kubernetes.io/projected/9ff906a7-6277-4d2e-b804-4d8e006cab7d-kube-api-access-br7pq\") pod \"nova-scheduler-0\" (UID: \"9ff906a7-6277-4d2e-b804-4d8e006cab7d\") " pod="openstack/nova-scheduler-0"
Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.811282 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff906a7-6277-4d2e-b804-4d8e006cab7d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9ff906a7-6277-4d2e-b804-4d8e006cab7d\") " pod="openstack/nova-scheduler-0"
Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.811482 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff906a7-6277-4d2e-b804-4d8e006cab7d-config-data\") pod \"nova-scheduler-0\" (UID: \"9ff906a7-6277-4d2e-b804-4d8e006cab7d\") " pod="openstack/nova-scheduler-0"
Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.913860 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br7pq\" (UniqueName: \"kubernetes.io/projected/9ff906a7-6277-4d2e-b804-4d8e006cab7d-kube-api-access-br7pq\") pod \"nova-scheduler-0\" (UID: \"9ff906a7-6277-4d2e-b804-4d8e006cab7d\") " pod="openstack/nova-scheduler-0"
Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.914212 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff906a7-6277-4d2e-b804-4d8e006cab7d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9ff906a7-6277-4d2e-b804-4d8e006cab7d\") " pod="openstack/nova-scheduler-0"
Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.914236 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff906a7-6277-4d2e-b804-4d8e006cab7d-config-data\") pod \"nova-scheduler-0\" (UID: \"9ff906a7-6277-4d2e-b804-4d8e006cab7d\") " pod="openstack/nova-scheduler-0"
Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.918504 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff906a7-6277-4d2e-b804-4d8e006cab7d-config-data\") pod \"nova-scheduler-0\" (UID: \"9ff906a7-6277-4d2e-b804-4d8e006cab7d\") " pod="openstack/nova-scheduler-0"
Dec 16 12:20:11 crc kubenswrapper[4805]: I1216 12:20:11.918717 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff906a7-6277-4d2e-b804-4d8e006cab7d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9ff906a7-6277-4d2e-b804-4d8e006cab7d\") " pod="openstack/nova-scheduler-0"
Dec 16 12:20:12 crc kubenswrapper[4805]: I1216 12:20:12.031127 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br7pq\" (UniqueName: \"kubernetes.io/projected/9ff906a7-6277-4d2e-b804-4d8e006cab7d-kube-api-access-br7pq\") pod \"nova-scheduler-0\" (UID: \"9ff906a7-6277-4d2e-b804-4d8e006cab7d\") " pod="openstack/nova-scheduler-0"
Dec 16 12:20:12 crc kubenswrapper[4805]: I1216 12:20:12.329189 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 16 12:20:12 crc kubenswrapper[4805]: I1216 12:20:12.552753 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e50677-0733-4d96-97a5-6c2b12ecef0c" path="/var/lib/kubelet/pods/81e50677-0733-4d96-97a5-6c2b12ecef0c/volumes"
Dec 16 12:20:12 crc kubenswrapper[4805]: I1216 12:20:12.808504 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 16 12:20:12 crc kubenswrapper[4805]: W1216 12:20:12.809230 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ff906a7_6277_4d2e_b804_4d8e006cab7d.slice/crio-cb117b1cfa1b89ec0c51af17ea726cae105ff41e7264c16bb9af6e7d96f4fb55 WatchSource:0}: Error finding container cb117b1cfa1b89ec0c51af17ea726cae105ff41e7264c16bb9af6e7d96f4fb55: Status 404 returned error can't find the container with id cb117b1cfa1b89ec0c51af17ea726cae105ff41e7264c16bb9af6e7d96f4fb55
Dec 16 12:20:13 crc kubenswrapper[4805]: I1216 12:20:13.623289 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9ff906a7-6277-4d2e-b804-4d8e006cab7d","Type":"ContainerStarted","Data":"8d2abf23c98e96fff0987694311f09216ad0e97b9b8b40f759ad72080a7a4331"}
Dec 16 12:20:13 crc kubenswrapper[4805]: I1216 12:20:13.623949 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9ff906a7-6277-4d2e-b804-4d8e006cab7d","Type":"ContainerStarted","Data":"cb117b1cfa1b89ec0c51af17ea726cae105ff41e7264c16bb9af6e7d96f4fb55"}
Dec 16 12:20:13 crc kubenswrapper[4805]: I1216 12:20:13.652361 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.652332618 podStartE2EDuration="2.652332618s" podCreationTimestamp="2025-12-16 12:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:20:13.638016267 +0000 UTC m=+1487.356274072" watchObservedRunningTime="2025-12-16 12:20:13.652332618 +0000 UTC m=+1487.370590443"
Dec 16 12:20:15 crc kubenswrapper[4805]: I1216 12:20:15.050637 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 16 12:20:15 crc kubenswrapper[4805]: I1216 12:20:15.050941 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 16 12:20:17 crc kubenswrapper[4805]: I1216 12:20:17.233986 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 16 12:20:17 crc kubenswrapper[4805]: I1216 12:20:17.234304 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 16 12:20:17 crc kubenswrapper[4805]: I1216 12:20:17.330276 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 16 12:20:18 crc kubenswrapper[4805]: I1216 12:20:18.246325 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="31340444-d4f0-468b-9acb-ca27b87165a9" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 16 12:20:18 crc kubenswrapper[4805]: I1216 12:20:18.246349 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="31340444-d4f0-468b-9acb-ca27b87165a9" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 16 12:20:20 crc kubenswrapper[4805]: I1216 12:20:20.051317 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 16 12:20:20 crc kubenswrapper[4805]: I1216 12:20:20.051689 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 16 12:20:21 crc kubenswrapper[4805]: I1216 12:20:21.067350 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ba40942d-c8ca-45da-b36d-7e447dac985e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 16 12:20:21 crc kubenswrapper[4805]: I1216 12:20:21.068002 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ba40942d-c8ca-45da-b36d-7e447dac985e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 16 12:20:22 crc kubenswrapper[4805]: I1216 12:20:22.330391 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 16 12:20:22 crc kubenswrapper[4805]: I1216 12:20:22.362560 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 16 12:20:22 crc kubenswrapper[4805]: I1216 12:20:22.761211 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 16 12:20:27 crc kubenswrapper[4805]: I1216 12:20:27.240536 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 16 12:20:27 crc kubenswrapper[4805]: I1216 12:20:27.241645 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 16 12:20:27 crc kubenswrapper[4805]: I1216 12:20:27.244265 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 16 12:20:27 crc kubenswrapper[4805]: I1216 12:20:27.254923 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 16 12:20:27 crc kubenswrapper[4805]: I1216 12:20:27.780272 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 16 12:20:27 crc kubenswrapper[4805]: I1216 12:20:27.789016 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 16 12:20:28 crc kubenswrapper[4805]: I1216 12:20:28.747468 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Dec 16 12:20:30 crc kubenswrapper[4805]: I1216 12:20:30.058678 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 16 12:20:30 crc kubenswrapper[4805]: I1216 12:20:30.059552 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 16 12:20:30 crc kubenswrapper[4805]: I1216 12:20:30.065066 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 16 12:20:30 crc kubenswrapper[4805]: I1216 12:20:30.817951 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 16 12:20:39 crc kubenswrapper[4805]: I1216 12:20:39.310159 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 16 12:20:40 crc kubenswrapper[4805]: I1216 12:20:40.766252 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 16 12:20:44 crc kubenswrapper[4805]: I1216 12:20:44.624803 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="55267a44-aaa0-494b-922a-014b08eddcd9" containerName="rabbitmq" containerID="cri-o://1e25954fb7d4a311e570da01a2cad43843fd77f007db317cf2ff46c78efb4f16" gracePeriod=604795
Dec 16 12:20:45 crc kubenswrapper[4805]: I1216 12:20:45.235937 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="93a741b0-5bcd-407b-8af7-90bd52380217" containerName="rabbitmq" containerID="cri-o://16da384c12dcff4fba64983e8a411139d390f4effa26608f4d4c564762cc9686" gracePeriod=604796
Dec 16 12:20:48 crc kubenswrapper[4805]: I1216 12:20:48.284552 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="55267a44-aaa0-494b-922a-014b08eddcd9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused"
Dec 16 12:20:48 crc kubenswrapper[4805]: I1216 12:20:48.691371 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="93a741b0-5bcd-407b-8af7-90bd52380217" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused"
Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.017974 4805 generic.go:334] "Generic (PLEG): container finished" podID="55267a44-aaa0-494b-922a-014b08eddcd9" containerID="1e25954fb7d4a311e570da01a2cad43843fd77f007db317cf2ff46c78efb4f16" exitCode=0
Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.018105 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"55267a44-aaa0-494b-922a-014b08eddcd9","Type":"ContainerDied","Data":"1e25954fb7d4a311e570da01a2cad43843fd77f007db317cf2ff46c78efb4f16"}
Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.311947 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.423285 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-erlang-cookie\") pod \"55267a44-aaa0-494b-922a-014b08eddcd9\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.423354 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"55267a44-aaa0-494b-922a-014b08eddcd9\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.423382 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-config-data\") pod \"55267a44-aaa0-494b-922a-014b08eddcd9\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.423453 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-tls\") pod \"55267a44-aaa0-494b-922a-014b08eddcd9\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.423507 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-plugins\") pod \"55267a44-aaa0-494b-922a-014b08eddcd9\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.423532 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-confd\") pod \"55267a44-aaa0-494b-922a-014b08eddcd9\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.423557 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-server-conf\") pod \"55267a44-aaa0-494b-922a-014b08eddcd9\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.423638 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55267a44-aaa0-494b-922a-014b08eddcd9-erlang-cookie-secret\") pod \"55267a44-aaa0-494b-922a-014b08eddcd9\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.423693 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55267a44-aaa0-494b-922a-014b08eddcd9-pod-info\") pod \"55267a44-aaa0-494b-922a-014b08eddcd9\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.423740 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2smx4\" (UniqueName: \"kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-kube-api-access-2smx4\") pod \"55267a44-aaa0-494b-922a-014b08eddcd9\" (UID: 
\"55267a44-aaa0-494b-922a-014b08eddcd9\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.423774 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-plugins-conf\") pod \"55267a44-aaa0-494b-922a-014b08eddcd9\" (UID: \"55267a44-aaa0-494b-922a-014b08eddcd9\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.424944 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "55267a44-aaa0-494b-922a-014b08eddcd9" (UID: "55267a44-aaa0-494b-922a-014b08eddcd9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.425324 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "55267a44-aaa0-494b-922a-014b08eddcd9" (UID: "55267a44-aaa0-494b-922a-014b08eddcd9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.426192 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.426508 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.426882 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "55267a44-aaa0-494b-922a-014b08eddcd9" (UID: "55267a44-aaa0-494b-922a-014b08eddcd9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.433551 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "55267a44-aaa0-494b-922a-014b08eddcd9" (UID: "55267a44-aaa0-494b-922a-014b08eddcd9"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.441020 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55267a44-aaa0-494b-922a-014b08eddcd9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "55267a44-aaa0-494b-922a-014b08eddcd9" (UID: "55267a44-aaa0-494b-922a-014b08eddcd9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.443888 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/55267a44-aaa0-494b-922a-014b08eddcd9-pod-info" (OuterVolumeSpecName: "pod-info") pod "55267a44-aaa0-494b-922a-014b08eddcd9" (UID: "55267a44-aaa0-494b-922a-014b08eddcd9"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.458653 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-kube-api-access-2smx4" (OuterVolumeSpecName: "kube-api-access-2smx4") pod "55267a44-aaa0-494b-922a-014b08eddcd9" (UID: "55267a44-aaa0-494b-922a-014b08eddcd9"). InnerVolumeSpecName "kube-api-access-2smx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.468346 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "55267a44-aaa0-494b-922a-014b08eddcd9" (UID: "55267a44-aaa0-494b-922a-014b08eddcd9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.507203 4805 scope.go:117] "RemoveContainer" containerID="97ab27c6d088b4e5e6c6ccd8de6369894c35f4d52926d691c0e08120147a05b9" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.515610 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-config-data" (OuterVolumeSpecName: "config-data") pod "55267a44-aaa0-494b-922a-014b08eddcd9" (UID: "55267a44-aaa0-494b-922a-014b08eddcd9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.531049 4805 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55267a44-aaa0-494b-922a-014b08eddcd9-pod-info\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.531099 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2smx4\" (UniqueName: \"kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-kube-api-access-2smx4\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.531109 4805 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.531117 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.531167 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.531178 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.531186 4805 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55267a44-aaa0-494b-922a-014b08eddcd9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.570450 4805 operation_generator.go:917] UnmountDevice succeeded for 
volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.616604 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-server-conf" (OuterVolumeSpecName: "server-conf") pod "55267a44-aaa0-494b-922a-014b08eddcd9" (UID: "55267a44-aaa0-494b-922a-014b08eddcd9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.633104 4805 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.633170 4805 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/55267a44-aaa0-494b-922a-014b08eddcd9-server-conf\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.636242 4805 scope.go:117] "RemoveContainer" containerID="1e25954fb7d4a311e570da01a2cad43843fd77f007db317cf2ff46c78efb4f16" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.671831 4805 scope.go:117] "RemoveContainer" containerID="ba41853f3136944c440c35ba755957daedde0ef9f029dbcc7b426ba254586616" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.684836 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "55267a44-aaa0-494b-922a-014b08eddcd9" (UID: "55267a44-aaa0-494b-922a-014b08eddcd9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.736295 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55267a44-aaa0-494b-922a-014b08eddcd9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.855250 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.940653 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-plugins\") pod \"93a741b0-5bcd-407b-8af7-90bd52380217\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.940738 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-plugins-conf\") pod \"93a741b0-5bcd-407b-8af7-90bd52380217\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.940793 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtpkg\" (UniqueName: \"kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-kube-api-access-rtpkg\") pod \"93a741b0-5bcd-407b-8af7-90bd52380217\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.940831 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-config-data\") pod \"93a741b0-5bcd-407b-8af7-90bd52380217\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.940885 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-tls\") pod \"93a741b0-5bcd-407b-8af7-90bd52380217\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.940926 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-server-conf\") pod \"93a741b0-5bcd-407b-8af7-90bd52380217\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.940970 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-erlang-cookie\") pod \"93a741b0-5bcd-407b-8af7-90bd52380217\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.941056 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/93a741b0-5bcd-407b-8af7-90bd52380217-erlang-cookie-secret\") pod \"93a741b0-5bcd-407b-8af7-90bd52380217\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.941184 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/93a741b0-5bcd-407b-8af7-90bd52380217-pod-info\") pod \"93a741b0-5bcd-407b-8af7-90bd52380217\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.941234 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-confd\") pod 
\"93a741b0-5bcd-407b-8af7-90bd52380217\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.941291 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"93a741b0-5bcd-407b-8af7-90bd52380217\" (UID: \"93a741b0-5bcd-407b-8af7-90bd52380217\") " Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.943369 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "93a741b0-5bcd-407b-8af7-90bd52380217" (UID: "93a741b0-5bcd-407b-8af7-90bd52380217"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.944918 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "93a741b0-5bcd-407b-8af7-90bd52380217" (UID: "93a741b0-5bcd-407b-8af7-90bd52380217"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.945252 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "93a741b0-5bcd-407b-8af7-90bd52380217" (UID: "93a741b0-5bcd-407b-8af7-90bd52380217"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.950418 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "93a741b0-5bcd-407b-8af7-90bd52380217" (UID: "93a741b0-5bcd-407b-8af7-90bd52380217"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.955486 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "93a741b0-5bcd-407b-8af7-90bd52380217" (UID: "93a741b0-5bcd-407b-8af7-90bd52380217"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.960355 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-kube-api-access-rtpkg" (OuterVolumeSpecName: "kube-api-access-rtpkg") pod "93a741b0-5bcd-407b-8af7-90bd52380217" (UID: "93a741b0-5bcd-407b-8af7-90bd52380217"). InnerVolumeSpecName "kube-api-access-rtpkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.963264 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a741b0-5bcd-407b-8af7-90bd52380217-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "93a741b0-5bcd-407b-8af7-90bd52380217" (UID: "93a741b0-5bcd-407b-8af7-90bd52380217"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:20:51 crc kubenswrapper[4805]: I1216 12:20:51.980490 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/93a741b0-5bcd-407b-8af7-90bd52380217-pod-info" (OuterVolumeSpecName: "pod-info") pod "93a741b0-5bcd-407b-8af7-90bd52380217" (UID: "93a741b0-5bcd-407b-8af7-90bd52380217"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.037655 4805 generic.go:334] "Generic (PLEG): container finished" podID="93a741b0-5bcd-407b-8af7-90bd52380217" containerID="16da384c12dcff4fba64983e8a411139d390f4effa26608f4d4c564762cc9686" exitCode=0 Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.037928 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.039088 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"93a741b0-5bcd-407b-8af7-90bd52380217","Type":"ContainerDied","Data":"16da384c12dcff4fba64983e8a411139d390f4effa26608f4d4c564762cc9686"} Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.039175 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"93a741b0-5bcd-407b-8af7-90bd52380217","Type":"ContainerDied","Data":"f3ae1d7a95100f2aee378b22642056b09074b48d7514572f09e95f055164e80b"} Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.039198 4805 scope.go:117] "RemoveContainer" containerID="16da384c12dcff4fba64983e8a411139d390f4effa26608f4d4c564762cc9686" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.039966 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.040255 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"55267a44-aaa0-494b-922a-014b08eddcd9","Type":"ContainerDied","Data":"dcca8b6be5f16cb4ee871e4c8511c00bcd461954c7833446a3891d0d65646e72"} Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.040782 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-config-data" (OuterVolumeSpecName: "config-data") pod "93a741b0-5bcd-407b-8af7-90bd52380217" (UID: "93a741b0-5bcd-407b-8af7-90bd52380217"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.044270 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.044311 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.044325 4805 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/93a741b0-5bcd-407b-8af7-90bd52380217-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.044358 4805 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/93a741b0-5bcd-407b-8af7-90bd52380217-pod-info\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.044392 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.044405 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.044435 4805 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.044447 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtpkg\" (UniqueName: \"kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-kube-api-access-rtpkg\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.044458 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.089300 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-server-conf" (OuterVolumeSpecName: "server-conf") pod "93a741b0-5bcd-407b-8af7-90bd52380217" (UID: "93a741b0-5bcd-407b-8af7-90bd52380217"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.105229 4805 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.115201 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "93a741b0-5bcd-407b-8af7-90bd52380217" (UID: "93a741b0-5bcd-407b-8af7-90bd52380217"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.146539 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/93a741b0-5bcd-407b-8af7-90bd52380217-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.146570 4805 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.146582 4805 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/93a741b0-5bcd-407b-8af7-90bd52380217-server-conf\") on node \"crc\" DevicePath \"\"" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.215675 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.218670 4805 scope.go:117] "RemoveContainer" containerID="87a734dd4fa1ce3df2d708801310e4e55538ae62cd5acb8dd81e886b21729644" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.231567 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.248628 4805 scope.go:117] "RemoveContainer" containerID="16da384c12dcff4fba64983e8a411139d390f4effa26608f4d4c564762cc9686" Dec 16 12:20:52 crc kubenswrapper[4805]: E1216 12:20:52.249126 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16da384c12dcff4fba64983e8a411139d390f4effa26608f4d4c564762cc9686\": container with ID starting with 16da384c12dcff4fba64983e8a411139d390f4effa26608f4d4c564762cc9686 not found: ID does not exist" containerID="16da384c12dcff4fba64983e8a411139d390f4effa26608f4d4c564762cc9686" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.249180 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16da384c12dcff4fba64983e8a411139d390f4effa26608f4d4c564762cc9686"} err="failed to get container status \"16da384c12dcff4fba64983e8a411139d390f4effa26608f4d4c564762cc9686\": rpc error: code = NotFound desc = could not find container \"16da384c12dcff4fba64983e8a411139d390f4effa26608f4d4c564762cc9686\": container with ID starting with 16da384c12dcff4fba64983e8a411139d390f4effa26608f4d4c564762cc9686 not found: ID does not exist" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.249207 4805 scope.go:117] "RemoveContainer" containerID="87a734dd4fa1ce3df2d708801310e4e55538ae62cd5acb8dd81e886b21729644" Dec 16 12:20:52 crc kubenswrapper[4805]: E1216 12:20:52.250062 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87a734dd4fa1ce3df2d708801310e4e55538ae62cd5acb8dd81e886b21729644\": container with ID starting with 87a734dd4fa1ce3df2d708801310e4e55538ae62cd5acb8dd81e886b21729644 not found: ID does not exist" containerID="87a734dd4fa1ce3df2d708801310e4e55538ae62cd5acb8dd81e886b21729644" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.250092 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87a734dd4fa1ce3df2d708801310e4e55538ae62cd5acb8dd81e886b21729644"} err="failed to get container status \"87a734dd4fa1ce3df2d708801310e4e55538ae62cd5acb8dd81e886b21729644\": rpc error: code = 
NotFound desc = could not find container \"87a734dd4fa1ce3df2d708801310e4e55538ae62cd5acb8dd81e886b21729644\": container with ID starting with 87a734dd4fa1ce3df2d708801310e4e55538ae62cd5acb8dd81e886b21729644 not found: ID does not exist" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.254786 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 16 12:20:52 crc kubenswrapper[4805]: E1216 12:20:52.255280 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93a741b0-5bcd-407b-8af7-90bd52380217" containerName="setup-container" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.255303 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="93a741b0-5bcd-407b-8af7-90bd52380217" containerName="setup-container" Dec 16 12:20:52 crc kubenswrapper[4805]: E1216 12:20:52.255337 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55267a44-aaa0-494b-922a-014b08eddcd9" containerName="setup-container" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.255346 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="55267a44-aaa0-494b-922a-014b08eddcd9" containerName="setup-container" Dec 16 12:20:52 crc kubenswrapper[4805]: E1216 12:20:52.255357 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93a741b0-5bcd-407b-8af7-90bd52380217" containerName="rabbitmq" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.255363 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="93a741b0-5bcd-407b-8af7-90bd52380217" containerName="rabbitmq" Dec 16 12:20:52 crc kubenswrapper[4805]: E1216 12:20:52.255378 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55267a44-aaa0-494b-922a-014b08eddcd9" containerName="rabbitmq" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.255384 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="55267a44-aaa0-494b-922a-014b08eddcd9" containerName="rabbitmq" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.255592 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="93a741b0-5bcd-407b-8af7-90bd52380217" containerName="rabbitmq" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.255615 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="55267a44-aaa0-494b-922a-014b08eddcd9" containerName="rabbitmq" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.260655 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.267365 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.267575 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.268158 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.268205 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.267369 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.268374 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.268438 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-twxjr" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.298764 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.349301 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/954917f7-4d5d-4dac-9621-f3c281539cf0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.349401 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/954917f7-4d5d-4dac-9621-f3c281539cf0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.349432 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/954917f7-4d5d-4dac-9621-f3c281539cf0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.349463 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/954917f7-4d5d-4dac-9621-f3c281539cf0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.349498 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/954917f7-4d5d-4dac-9621-f3c281539cf0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.349526 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/954917f7-4d5d-4dac-9621-f3c281539cf0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.349552 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.349576 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/954917f7-4d5d-4dac-9621-f3c281539cf0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.349605 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/954917f7-4d5d-4dac-9621-f3c281539cf0-config-data\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.349638 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdh2c\" (UniqueName: \"kubernetes.io/projected/954917f7-4d5d-4dac-9621-f3c281539cf0-kube-api-access-gdh2c\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.349664 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/954917f7-4d5d-4dac-9621-f3c281539cf0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.389361 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.398072 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.429758 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.437853 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.441657 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.441843 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.442120 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.442415 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.445521 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-k9stn" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.445695 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.445853 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.451041 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/954917f7-4d5d-4dac-9621-f3c281539cf0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.451083 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/954917f7-4d5d-4dac-9621-f3c281539cf0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.451112 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/954917f7-4d5d-4dac-9621-f3c281539cf0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.451155 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/954917f7-4d5d-4dac-9621-f3c281539cf0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.451178 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/954917f7-4d5d-4dac-9621-f3c281539cf0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.451197 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.451216 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/954917f7-4d5d-4dac-9621-f3c281539cf0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.451236 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/954917f7-4d5d-4dac-9621-f3c281539cf0-config-data\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.451260 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdh2c\" (UniqueName: \"kubernetes.io/projected/954917f7-4d5d-4dac-9621-f3c281539cf0-kube-api-access-gdh2c\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.451278 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/954917f7-4d5d-4dac-9621-f3c281539cf0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.451331 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/954917f7-4d5d-4dac-9621-f3c281539cf0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.454773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/954917f7-4d5d-4dac-9621-f3c281539cf0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.457921 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/954917f7-4d5d-4dac-9621-f3c281539cf0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.458535 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/954917f7-4d5d-4dac-9621-f3c281539cf0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.458773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/954917f7-4d5d-4dac-9621-f3c281539cf0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.461099 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/954917f7-4d5d-4dac-9621-f3c281539cf0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " 
pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.463607 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/954917f7-4d5d-4dac-9621-f3c281539cf0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.463688 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.467064 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/954917f7-4d5d-4dac-9621-f3c281539cf0-config-data\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.469078 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/954917f7-4d5d-4dac-9621-f3c281539cf0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.469094 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.470151 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/954917f7-4d5d-4dac-9621-f3c281539cf0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.520053 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdh2c\" (UniqueName: \"kubernetes.io/projected/954917f7-4d5d-4dac-9621-f3c281539cf0-kube-api-access-gdh2c\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.538949 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"954917f7-4d5d-4dac-9621-f3c281539cf0\") " pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.553078 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/09870268-6496-4840-bd93-b9ae456cb54a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.553120 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/09870268-6496-4840-bd93-b9ae456cb54a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 
12:20:52.553163 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqnxr\" (UniqueName: \"kubernetes.io/projected/09870268-6496-4840-bd93-b9ae456cb54a-kube-api-access-jqnxr\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.553233 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/09870268-6496-4840-bd93-b9ae456cb54a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.553316 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.553346 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/09870268-6496-4840-bd93-b9ae456cb54a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.553373 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/09870268-6496-4840-bd93-b9ae456cb54a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.553408 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/09870268-6496-4840-bd93-b9ae456cb54a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.553450 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09870268-6496-4840-bd93-b9ae456cb54a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.553501 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/09870268-6496-4840-bd93-b9ae456cb54a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.553526 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/09870268-6496-4840-bd93-b9ae456cb54a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc 
kubenswrapper[4805]: I1216 12:20:52.561118 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55267a44-aaa0-494b-922a-014b08eddcd9" path="/var/lib/kubelet/pods/55267a44-aaa0-494b-922a-014b08eddcd9/volumes" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.562480 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93a741b0-5bcd-407b-8af7-90bd52380217" path="/var/lib/kubelet/pods/93a741b0-5bcd-407b-8af7-90bd52380217/volumes" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.584097 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.664873 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.664952 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/09870268-6496-4840-bd93-b9ae456cb54a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.665006 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/09870268-6496-4840-bd93-b9ae456cb54a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.665045 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/09870268-6496-4840-bd93-b9ae456cb54a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.665120 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09870268-6496-4840-bd93-b9ae456cb54a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.665264 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/09870268-6496-4840-bd93-b9ae456cb54a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.665307 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/09870268-6496-4840-bd93-b9ae456cb54a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.665377 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/09870268-6496-4840-bd93-b9ae456cb54a-rabbitmq-plugins\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.665401 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/09870268-6496-4840-bd93-b9ae456cb54a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.665433 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqnxr\" (UniqueName: \"kubernetes.io/projected/09870268-6496-4840-bd93-b9ae456cb54a-kube-api-access-jqnxr\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.665481 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/09870268-6496-4840-bd93-b9ae456cb54a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.672793 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/09870268-6496-4840-bd93-b9ae456cb54a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.673552 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/09870268-6496-4840-bd93-b9ae456cb54a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.673880 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/09870268-6496-4840-bd93-b9ae456cb54a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.673907 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.675062 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/09870268-6496-4840-bd93-b9ae456cb54a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.676674 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09870268-6496-4840-bd93-b9ae456cb54a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 
12:20:52.678049 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/09870268-6496-4840-bd93-b9ae456cb54a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.683197 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/09870268-6496-4840-bd93-b9ae456cb54a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.686308 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/09870268-6496-4840-bd93-b9ae456cb54a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.688663 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/09870268-6496-4840-bd93-b9ae456cb54a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.694931 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqnxr\" (UniqueName: \"kubernetes.io/projected/09870268-6496-4840-bd93-b9ae456cb54a-kube-api-access-jqnxr\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.725429 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"09870268-6496-4840-bd93-b9ae456cb54a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:52 crc kubenswrapper[4805]: I1216 12:20:52.814559 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.186362 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-lmz8l"] Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.189456 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.204559 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.235992 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.281479 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.281623 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.281652 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-config\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.281679 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.281747 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.281764 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr7gj\" (UniqueName: \"kubernetes.io/projected/1b3f1474-85f4-4091-9e7e-44848d09f594-kube-api-access-sr7gj\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.281795 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-dns-svc\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.285697 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-lmz8l"] Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.376526 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Dec 16 12:20:53 crc kubenswrapper[4805]: W1216 12:20:53.380922 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09870268_6496_4840_bd93_b9ae456cb54a.slice/crio-f79552f8f8b3e95c90a0ce993e23bf7394c985faf6e44c2a399025a06b8dc093 WatchSource:0}: Error finding container f79552f8f8b3e95c90a0ce993e23bf7394c985faf6e44c2a399025a06b8dc093: Status 404 returned error can't find the container with id f79552f8f8b3e95c90a0ce993e23bf7394c985faf6e44c2a399025a06b8dc093 Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.383243 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.383331 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-config\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.383376 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.383480 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.383556 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr7gj\" (UniqueName: \"kubernetes.io/projected/1b3f1474-85f4-4091-9e7e-44848d09f594-kube-api-access-sr7gj\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.383604 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-dns-svc\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.383694 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.385066 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.385986 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.386903 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-config\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.387652 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-dns-svc\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.388277 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.389831 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.403160 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr7gj\" (UniqueName: \"kubernetes.io/projected/1b3f1474-85f4-4091-9e7e-44848d09f594-kube-api-access-sr7gj\") pod \"dnsmasq-dns-67b789f86c-lmz8l\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:53 crc kubenswrapper[4805]: I1216 12:20:53.616896 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:20:54 crc kubenswrapper[4805]: I1216 12:20:54.052792 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-lmz8l"] Dec 16 12:20:54 crc kubenswrapper[4805]: I1216 12:20:54.078696 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"09870268-6496-4840-bd93-b9ae456cb54a","Type":"ContainerStarted","Data":"f79552f8f8b3e95c90a0ce993e23bf7394c985faf6e44c2a399025a06b8dc093"} Dec 16 12:20:54 crc kubenswrapper[4805]: I1216 12:20:54.082679 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"954917f7-4d5d-4dac-9621-f3c281539cf0","Type":"ContainerStarted","Data":"63638d99c0f0590cdce72939cf16d038233a8e990b7aeb800f1e8961b373baf1"} Dec 16 12:20:55 crc kubenswrapper[4805]: I1216 12:20:55.095386 4805 generic.go:334] "Generic (PLEG): container finished" podID="1b3f1474-85f4-4091-9e7e-44848d09f594" containerID="8798a5f5bfbf0507e2d4936e3816c7b7fc5cfb16b0f6154667fa6e888977b9d4" exitCode=0 Dec 16 12:20:55 crc kubenswrapper[4805]: I1216 12:20:55.095511 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" event={"ID":"1b3f1474-85f4-4091-9e7e-44848d09f594","Type":"ContainerDied","Data":"8798a5f5bfbf0507e2d4936e3816c7b7fc5cfb16b0f6154667fa6e888977b9d4"} Dec 16 12:20:55 crc kubenswrapper[4805]: I1216 12:20:55.095933 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" event={"ID":"1b3f1474-85f4-4091-9e7e-44848d09f594","Type":"ContainerStarted","Data":"ee7318d387ff834fcfd2eda4560451da77ccdc1fde539da39057479ac96db93e"} Dec 16 12:20:55 crc kubenswrapper[4805]: I1216 12:20:55.097823 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"954917f7-4d5d-4dac-9621-f3c281539cf0","Type":"ContainerStarted","Data":"3d492145557f67e8588af2a34a219ae7564eeb91496cf3fdb7dcaaeb4add3782"} Dec 16 12:20:56 crc kubenswrapper[4805]: I1216 12:20:56.124635 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"09870268-6496-4840-bd93-b9ae456cb54a","Type":"ContainerStarted","Data":"aec400287c53c453248ff43539f3a4902fa286cff0702cb12d3f0b2ea492a387"} Dec 16 12:20:56 crc kubenswrapper[4805]: I1216 12:20:56.127886 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" event={"ID":"1b3f1474-85f4-4091-9e7e-44848d09f594","Type":"ContainerStarted","Data":"a551bf46ddd3cba816083622715a3ee901f5669c08bb610ce9a01847040be7b8"} Dec 16 12:20:56 crc kubenswrapper[4805]: I1216 12:20:56.187184 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" podStartSLOduration=3.187162655 podStartE2EDuration="3.187162655s" podCreationTimestamp="2025-12-16 12:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:20:56.172221277 +0000 UTC m=+1529.890479102" watchObservedRunningTime="2025-12-16 12:20:56.187162655 +0000 UTC m=+1529.905420470" Dec 16 12:20:57 crc kubenswrapper[4805]: I1216 12:20:57.071463 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Dec 16 12:20:57 crc kubenswrapper[4805]: I1216 12:20:57.071531 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:20:57 crc kubenswrapper[4805]: I1216 12:20:57.136549 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:21:03 crc kubenswrapper[4805]: I1216 12:21:03.618298 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:21:03 crc kubenswrapper[4805]: I1216 12:21:03.783001 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-5ncmd"] Dec 16 12:21:03 crc kubenswrapper[4805]: I1216 12:21:03.783293 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" podUID="4bbdc89b-8d50-4756-83ae-eaeeaa419579" containerName="dnsmasq-dns" containerID="cri-o://24da059aa451978d31b1b1e30b3cf8fa240a3c2f3a314cb1c3393312ee25a8ef" gracePeriod=10 Dec 16 12:21:03 crc kubenswrapper[4805]: I1216 12:21:03.847799 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" podUID="4bbdc89b-8d50-4756-83ae-eaeeaa419579" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.201:5353: connect: connection refused" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.024630 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79dc84bdb7-mnp8m"] Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.031501 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.115232 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79dc84bdb7-mnp8m"] Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.127186 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-ovsdbserver-nb\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.127249 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-config\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.128009 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-dns-svc\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.128088 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbm28\" (UniqueName: \"kubernetes.io/projected/d507896c-ad5d-4fd8-9df2-22feaa838e8f-kube-api-access-kbm28\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.128117 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-openstack-edpm-ipam\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.128135 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-dns-swift-storage-0\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.128222 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-ovsdbserver-sb\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.217893 4805 generic.go:334] "Generic (PLEG): container finished" podID="4bbdc89b-8d50-4756-83ae-eaeeaa419579" containerID="24da059aa451978d31b1b1e30b3cf8fa240a3c2f3a314cb1c3393312ee25a8ef" exitCode=0 Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.217946 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" 
event={"ID":"4bbdc89b-8d50-4756-83ae-eaeeaa419579","Type":"ContainerDied","Data":"24da059aa451978d31b1b1e30b3cf8fa240a3c2f3a314cb1c3393312ee25a8ef"} Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.229608 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbm28\" (UniqueName: \"kubernetes.io/projected/d507896c-ad5d-4fd8-9df2-22feaa838e8f-kube-api-access-kbm28\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.229658 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-openstack-edpm-ipam\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.229675 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-dns-swift-storage-0\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.229734 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-ovsdbserver-sb\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.229849 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-ovsdbserver-nb\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.229880 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-config\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.229904 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-dns-svc\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.231036 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-ovsdbserver-nb\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.231069 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-dns-swift-storage-0\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" 
(UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.231131 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-dns-svc\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.232682 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-openstack-edpm-ipam\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.233050 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-config\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.233888 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d507896c-ad5d-4fd8-9df2-22feaa838e8f-ovsdbserver-sb\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.281709 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbm28\" (UniqueName: \"kubernetes.io/projected/d507896c-ad5d-4fd8-9df2-22feaa838e8f-kube-api-access-kbm28\") pod \"dnsmasq-dns-79dc84bdb7-mnp8m\" (UID: \"d507896c-ad5d-4fd8-9df2-22feaa838e8f\") " pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.353970 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.368620 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.435083 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-ovsdbserver-sb\") pod \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.435538 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-dns-svc\") pod \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.435656 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8sxn\" (UniqueName: \"kubernetes.io/projected/4bbdc89b-8d50-4756-83ae-eaeeaa419579-kube-api-access-r8sxn\") pod \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.435723 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-dns-swift-storage-0\") pod \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.435743 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-config\") pod \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.435766 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-ovsdbserver-nb\") pod \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.448039 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bbdc89b-8d50-4756-83ae-eaeeaa419579-kube-api-access-r8sxn" (OuterVolumeSpecName: "kube-api-access-r8sxn") pod "4bbdc89b-8d50-4756-83ae-eaeeaa419579" (UID: "4bbdc89b-8d50-4756-83ae-eaeeaa419579"). InnerVolumeSpecName "kube-api-access-r8sxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.536441 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4bbdc89b-8d50-4756-83ae-eaeeaa419579" (UID: "4bbdc89b-8d50-4756-83ae-eaeeaa419579"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.559487 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-dns-svc\") pod \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\" (UID: \"4bbdc89b-8d50-4756-83ae-eaeeaa419579\") " Dec 16 12:21:04 crc kubenswrapper[4805]: W1216 12:21:04.559762 4805 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4bbdc89b-8d50-4756-83ae-eaeeaa419579/volumes/kubernetes.io~configmap/dns-svc Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.559817 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4bbdc89b-8d50-4756-83ae-eaeeaa419579" (UID: "4bbdc89b-8d50-4756-83ae-eaeeaa419579"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.561768 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.561795 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8sxn\" (UniqueName: \"kubernetes.io/projected/4bbdc89b-8d50-4756-83ae-eaeeaa419579-kube-api-access-r8sxn\") on node \"crc\" DevicePath \"\"" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.566458 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4bbdc89b-8d50-4756-83ae-eaeeaa419579" (UID: "4bbdc89b-8d50-4756-83ae-eaeeaa419579"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.573172 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4bbdc89b-8d50-4756-83ae-eaeeaa419579" (UID: "4bbdc89b-8d50-4756-83ae-eaeeaa419579"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.573959 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-config" (OuterVolumeSpecName: "config") pod "4bbdc89b-8d50-4756-83ae-eaeeaa419579" (UID: "4bbdc89b-8d50-4756-83ae-eaeeaa419579"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.596680 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4bbdc89b-8d50-4756-83ae-eaeeaa419579" (UID: "4bbdc89b-8d50-4756-83ae-eaeeaa419579"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.663476 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.663724 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.663735 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.663744 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bbdc89b-8d50-4756-83ae-eaeeaa419579-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 16 12:21:04 crc kubenswrapper[4805]: I1216 12:21:04.882836 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79dc84bdb7-mnp8m"] Dec 16 12:21:05 crc kubenswrapper[4805]: I1216 12:21:05.230128 4805 generic.go:334] "Generic (PLEG): container finished" podID="d507896c-ad5d-4fd8-9df2-22feaa838e8f" containerID="fed263d7ed3f8987b4971736635e1004bbb41e225211f7785621d25c80ce4b4f" exitCode=0 Dec 16 12:21:05 crc kubenswrapper[4805]: I1216 12:21:05.230268 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" event={"ID":"d507896c-ad5d-4fd8-9df2-22feaa838e8f","Type":"ContainerDied","Data":"fed263d7ed3f8987b4971736635e1004bbb41e225211f7785621d25c80ce4b4f"} Dec 16 12:21:05 crc kubenswrapper[4805]: I1216 12:21:05.230775 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" event={"ID":"d507896c-ad5d-4fd8-9df2-22feaa838e8f","Type":"ContainerStarted","Data":"7bb0626df865d104923fcbdaa1a61d3d5ff1bf39808185c1578c217addeb34ce"} Dec 16 12:21:05 crc kubenswrapper[4805]: I1216 12:21:05.236594 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" event={"ID":"4bbdc89b-8d50-4756-83ae-eaeeaa419579","Type":"ContainerDied","Data":"0754bf20fa20c0b4a9fcc8940a3210872b990ca3a99c19367da2b96779e44a81"} Dec 16 12:21:05 crc kubenswrapper[4805]: I1216 12:21:05.236682 4805 scope.go:117] "RemoveContainer" containerID="24da059aa451978d31b1b1e30b3cf8fa240a3c2f3a314cb1c3393312ee25a8ef" Dec 16 12:21:05 crc kubenswrapper[4805]: I1216 12:21:05.236743 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-5ncmd" Dec 16 12:21:05 crc kubenswrapper[4805]: I1216 12:21:05.374425 4805 scope.go:117] "RemoveContainer" containerID="ccf0006284aecc1fee2fbef45185e02937f14452739c6f4f8060745ee44b67d6" Dec 16 12:21:05 crc kubenswrapper[4805]: I1216 12:21:05.418909 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-5ncmd"] Dec 16 12:21:05 crc kubenswrapper[4805]: I1216 12:21:05.428134 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-5ncmd"] Dec 16 12:21:06 crc kubenswrapper[4805]: I1216 12:21:06.287346 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" event={"ID":"d507896c-ad5d-4fd8-9df2-22feaa838e8f","Type":"ContainerStarted","Data":"38532423e7ba98e11ace44b5b07163bc0ceb34f8d0bba1179952c3e3b86c1595"} Dec 16 12:21:06 crc kubenswrapper[4805]: I1216 12:21:06.287650 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:06 crc kubenswrapper[4805]: I1216 12:21:06.312997 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" podStartSLOduration=3.312974109 podStartE2EDuration="3.312974109s" podCreationTimestamp="2025-12-16 12:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:21:06.30811522 +0000 UTC m=+1540.026373055" watchObservedRunningTime="2025-12-16 12:21:06.312974109 +0000 UTC m=+1540.031231924" Dec 16 12:21:06 crc kubenswrapper[4805]: I1216 12:21:06.535447 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bbdc89b-8d50-4756-83ae-eaeeaa419579" path="/var/lib/kubelet/pods/4bbdc89b-8d50-4756-83ae-eaeeaa419579/volumes" Dec 16 12:21:14 crc kubenswrapper[4805]: I1216 12:21:14.355398 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79dc84bdb7-mnp8m" Dec 16 12:21:14 crc kubenswrapper[4805]: I1216 12:21:14.429586 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-lmz8l"] Dec 16 12:21:14 crc kubenswrapper[4805]: I1216 12:21:14.429905 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" podUID="1b3f1474-85f4-4091-9e7e-44848d09f594" containerName="dnsmasq-dns" containerID="cri-o://a551bf46ddd3cba816083622715a3ee901f5669c08bb610ce9a01847040be7b8" gracePeriod=10 Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.065865 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.220969 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-config\") pod \"1b3f1474-85f4-4091-9e7e-44848d09f594\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.221068 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-ovsdbserver-sb\") pod \"1b3f1474-85f4-4091-9e7e-44848d09f594\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.221156 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-openstack-edpm-ipam\") pod \"1b3f1474-85f4-4091-9e7e-44848d09f594\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.221350 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr7gj\" (UniqueName: \"kubernetes.io/projected/1b3f1474-85f4-4091-9e7e-44848d09f594-kube-api-access-sr7gj\") pod \"1b3f1474-85f4-4091-9e7e-44848d09f594\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.221396 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-dns-swift-storage-0\") pod \"1b3f1474-85f4-4091-9e7e-44848d09f594\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.221421 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-dns-svc\") pod \"1b3f1474-85f4-4091-9e7e-44848d09f594\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.221444 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-ovsdbserver-nb\") pod \"1b3f1474-85f4-4091-9e7e-44848d09f594\" (UID: \"1b3f1474-85f4-4091-9e7e-44848d09f594\") " Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.228469 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b3f1474-85f4-4091-9e7e-44848d09f594-kube-api-access-sr7gj" (OuterVolumeSpecName: "kube-api-access-sr7gj") pod "1b3f1474-85f4-4091-9e7e-44848d09f594" (UID: "1b3f1474-85f4-4091-9e7e-44848d09f594"). InnerVolumeSpecName "kube-api-access-sr7gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.288470 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1b3f1474-85f4-4091-9e7e-44848d09f594" (UID: "1b3f1474-85f4-4091-9e7e-44848d09f594"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.291784 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "1b3f1474-85f4-4091-9e7e-44848d09f594" (UID: "1b3f1474-85f4-4091-9e7e-44848d09f594"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.296773 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-config" (OuterVolumeSpecName: "config") pod "1b3f1474-85f4-4091-9e7e-44848d09f594" (UID: "1b3f1474-85f4-4091-9e7e-44848d09f594"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.306220 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1b3f1474-85f4-4091-9e7e-44848d09f594" (UID: "1b3f1474-85f4-4091-9e7e-44848d09f594"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.308320 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b3f1474-85f4-4091-9e7e-44848d09f594" (UID: "1b3f1474-85f4-4091-9e7e-44848d09f594"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.311919 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1b3f1474-85f4-4091-9e7e-44848d09f594" (UID: "1b3f1474-85f4-4091-9e7e-44848d09f594"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.323164 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.323196 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.323208 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.323217 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-config\") on node \"crc\" DevicePath \"\"" Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.323225 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.323233 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b3f1474-85f4-4091-9e7e-44848d09f594-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.323243 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr7gj\" (UniqueName: \"kubernetes.io/projected/1b3f1474-85f4-4091-9e7e-44848d09f594-kube-api-access-sr7gj\") on node \"crc\" DevicePath \"\"" Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.378515 4805 generic.go:334] "Generic (PLEG): container finished" podID="1b3f1474-85f4-4091-9e7e-44848d09f594" containerID="a551bf46ddd3cba816083622715a3ee901f5669c08bb610ce9a01847040be7b8" exitCode=0 Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.378561 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" event={"ID":"1b3f1474-85f4-4091-9e7e-44848d09f594","Type":"ContainerDied","Data":"a551bf46ddd3cba816083622715a3ee901f5669c08bb610ce9a01847040be7b8"} Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.378580 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-lmz8l"
Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.378614 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-lmz8l" event={"ID":"1b3f1474-85f4-4091-9e7e-44848d09f594","Type":"ContainerDied","Data":"ee7318d387ff834fcfd2eda4560451da77ccdc1fde539da39057479ac96db93e"}
Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.378632 4805 scope.go:117] "RemoveContainer" containerID="a551bf46ddd3cba816083622715a3ee901f5669c08bb610ce9a01847040be7b8"
Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.399329 4805 scope.go:117] "RemoveContainer" containerID="8798a5f5bfbf0507e2d4936e3816c7b7fc5cfb16b0f6154667fa6e888977b9d4"
Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.411523 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-lmz8l"]
Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.423444 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-lmz8l"]
Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.436175 4805 scope.go:117] "RemoveContainer" containerID="a551bf46ddd3cba816083622715a3ee901f5669c08bb610ce9a01847040be7b8"
Dec 16 12:21:15 crc kubenswrapper[4805]: E1216 12:21:15.436664 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a551bf46ddd3cba816083622715a3ee901f5669c08bb610ce9a01847040be7b8\": container with ID starting with a551bf46ddd3cba816083622715a3ee901f5669c08bb610ce9a01847040be7b8 not found: ID does not exist" containerID="a551bf46ddd3cba816083622715a3ee901f5669c08bb610ce9a01847040be7b8"
Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.436696 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a551bf46ddd3cba816083622715a3ee901f5669c08bb610ce9a01847040be7b8"} err="failed to get container status \"a551bf46ddd3cba816083622715a3ee901f5669c08bb610ce9a01847040be7b8\": rpc error: code = NotFound desc = could not find container \"a551bf46ddd3cba816083622715a3ee901f5669c08bb610ce9a01847040be7b8\": container with ID starting with a551bf46ddd3cba816083622715a3ee901f5669c08bb610ce9a01847040be7b8 not found: ID does not exist"
Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.436728 4805 scope.go:117] "RemoveContainer" containerID="8798a5f5bfbf0507e2d4936e3816c7b7fc5cfb16b0f6154667fa6e888977b9d4"
Dec 16 12:21:15 crc kubenswrapper[4805]: E1216 12:21:15.437006 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8798a5f5bfbf0507e2d4936e3816c7b7fc5cfb16b0f6154667fa6e888977b9d4\": container with ID starting with 8798a5f5bfbf0507e2d4936e3816c7b7fc5cfb16b0f6154667fa6e888977b9d4 not found: ID does not exist" containerID="8798a5f5bfbf0507e2d4936e3816c7b7fc5cfb16b0f6154667fa6e888977b9d4"
Dec 16 12:21:15 crc kubenswrapper[4805]: I1216 12:21:15.437032 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8798a5f5bfbf0507e2d4936e3816c7b7fc5cfb16b0f6154667fa6e888977b9d4"} err="failed to get container status \"8798a5f5bfbf0507e2d4936e3816c7b7fc5cfb16b0f6154667fa6e888977b9d4\": rpc error: code = NotFound desc = could not find container \"8798a5f5bfbf0507e2d4936e3816c7b7fc5cfb16b0f6154667fa6e888977b9d4\": container with ID starting with 8798a5f5bfbf0507e2d4936e3816c7b7fc5cfb16b0f6154667fa6e888977b9d4 not found: ID does not exist"
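The NotFound errors above are a benign race: the dnsmasq-dns containers were already removed by an earlier "RemoveContainer" pass, so the later status lookup fails and the kubelet simply logs "DeleteContainer returned error" and moves on. When triaging a journal like this one, it can help to separate these from real runtime failures; a rough sketch, again assuming the journal text on stdin:

    #!/usr/bin/env python3
    # Split benign "already deleted" RemoveContainer errors from other
    # runtime-service failures in a kubelet journal.
    import sys

    benign = real = 0
    for line in sys.stdin:
        if "ContainerStatus from runtime service failed" not in line:
            continue
        if "code = NotFound" in line:
            benign += 1  # container was removed on an earlier pass
        else:
            real += 1
    print(f"benign NotFound: {benign}, other runtime errors: {real}")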
Dec 16 12:21:16 crc kubenswrapper[4805]: I1216 12:21:16.533490 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b3f1474-85f4-4091-9e7e-44848d09f594" path="/var/lib/kubelet/pods/1b3f1474-85f4-4091-9e7e-44848d09f594/volumes"
Dec 16 12:21:27 crc kubenswrapper[4805]: I1216 12:21:27.094045 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 16 12:21:27 crc kubenswrapper[4805]: I1216 12:21:27.094530 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 16 12:21:27 crc kubenswrapper[4805]: I1216 12:21:27.514880 4805 generic.go:334] "Generic (PLEG): container finished" podID="954917f7-4d5d-4dac-9621-f3c281539cf0" containerID="3d492145557f67e8588af2a34a219ae7564eeb91496cf3fdb7dcaaeb4add3782" exitCode=0
Dec 16 12:21:27 crc kubenswrapper[4805]: I1216 12:21:27.514990 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"954917f7-4d5d-4dac-9621-f3c281539cf0","Type":"ContainerDied","Data":"3d492145557f67e8588af2a34a219ae7564eeb91496cf3fdb7dcaaeb4add3782"}
Dec 16 12:21:27 crc kubenswrapper[4805]: I1216 12:21:27.522949 4805 generic.go:334] "Generic (PLEG): container finished" podID="09870268-6496-4840-bd93-b9ae456cb54a" containerID="aec400287c53c453248ff43539f3a4902fa286cff0702cb12d3f0b2ea492a387" exitCode=0
Dec 16 12:21:27 crc kubenswrapper[4805]: I1216 12:21:27.522991 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"09870268-6496-4840-bd93-b9ae456cb54a","Type":"ContainerDied","Data":"aec400287c53c453248ff43539f3a4902fa286cff0702cb12d3f0b2ea492a387"}
Dec 16 12:21:28 crc kubenswrapper[4805]: I1216 12:21:28.589000 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"954917f7-4d5d-4dac-9621-f3c281539cf0","Type":"ContainerStarted","Data":"6b7b1a046ec2fffff10ddc5952782c3ecdfb1df38c1ee8983d5aeac48125f3d0"}
Dec 16 12:21:28 crc kubenswrapper[4805]: I1216 12:21:28.590490 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"09870268-6496-4840-bd93-b9ae456cb54a","Type":"ContainerStarted","Data":"10a4b30b77d367512f5b64ddb91831c4c5dbc773338cad844a21d93506fd3b48"}
Dec 16 12:21:28 crc kubenswrapper[4805]: I1216 12:21:28.593867 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Dec 16 12:21:28 crc kubenswrapper[4805]: I1216 12:21:28.601440 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Dec 16 12:21:28 crc kubenswrapper[4805]: I1216 12:21:28.627988 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.627971001 podStartE2EDuration="36.627971001s" podCreationTimestamp="2025-12-16 12:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:21:28.624948475 +0000 UTC m=+1562.343206290" watchObservedRunningTime="2025-12-16 12:21:28.627971001 +0000 UTC m=+1562.346228816"
Dec 16 12:21:28 crc kubenswrapper[4805]: I1216 12:21:28.662674 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.662652946 podStartE2EDuration="36.662652946s" podCreationTimestamp="2025-12-16 12:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:21:28.650433795 +0000 UTC m=+1562.368691620" watchObservedRunningTime="2025-12-16 12:21:28.662652946 +0000 UTC m=+1562.380910761"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.546322 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2"]
Dec 16 12:21:35 crc kubenswrapper[4805]: E1216 12:21:35.547447 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3f1474-85f4-4091-9e7e-44848d09f594" containerName="init"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.547467 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3f1474-85f4-4091-9e7e-44848d09f594" containerName="init"
Dec 16 12:21:35 crc kubenswrapper[4805]: E1216 12:21:35.547490 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbdc89b-8d50-4756-83ae-eaeeaa419579" containerName="init"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.547497 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbdc89b-8d50-4756-83ae-eaeeaa419579" containerName="init"
Dec 16 12:21:35 crc kubenswrapper[4805]: E1216 12:21:35.547521 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbdc89b-8d50-4756-83ae-eaeeaa419579" containerName="dnsmasq-dns"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.547531 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbdc89b-8d50-4756-83ae-eaeeaa419579" containerName="dnsmasq-dns"
Dec 16 12:21:35 crc kubenswrapper[4805]: E1216 12:21:35.547553 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3f1474-85f4-4091-9e7e-44848d09f594" containerName="dnsmasq-dns"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.547561 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3f1474-85f4-4091-9e7e-44848d09f594" containerName="dnsmasq-dns"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.547844 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bbdc89b-8d50-4756-83ae-eaeeaa419579" containerName="dnsmasq-dns"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.547859 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b3f1474-85f4-4091-9e7e-44848d09f594" containerName="dnsmasq-dns"
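In the two startup-latency records above, podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals it because the zero-value firstStartedPulling/lastFinishedPulling timestamps mean no image pull happened; when a pull does occur (as for the repo-setup pod later in this journal), the SLO figure excludes the pull window. The arithmetic can be checked directly; a sketch using the rabbitmq-server-0 values, with nanoseconds truncated to microseconds since datetime carries no finer resolution:

    from datetime import datetime, timezone

    created = datetime(2025, 12, 16, 12, 20, 52, tzinfo=timezone.utc)
    # watchObservedRunningTime, .627971001 truncated to microseconds
    observed = datetime(2025, 12, 16, 12, 21, 28, 627971, tzinfo=timezone.utc)
    print((observed - created).total_seconds())  # 36.627971, matching podStartE2EDuration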
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.548810 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.551181 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.552219 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.552337 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.553898 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.571312 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2"]
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.590596 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.590966 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.591049 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.591120 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d57xv\" (UniqueName: \"kubernetes.io/projected/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-kube-api-access-d57xv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.692756 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2"
Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.692926 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-repo-setup-combined-ca-bundle\") pod
\"repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.692999 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.693039 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d57xv\" (UniqueName: \"kubernetes.io/projected/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-kube-api-access-d57xv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.701231 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.706350 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.710725 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d57xv\" (UniqueName: \"kubernetes.io/projected/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-kube-api-access-d57xv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.716851 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" Dec 16 12:21:35 crc kubenswrapper[4805]: I1216 12:21:35.875381 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" Dec 16 12:21:36 crc kubenswrapper[4805]: I1216 12:21:36.976622 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2"] Dec 16 12:21:37 crc kubenswrapper[4805]: I1216 12:21:37.647355 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" event={"ID":"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec","Type":"ContainerStarted","Data":"33ee75854a6ce95bc7f0454cb2468fc4c1ca20bd6b1edc83c2739e1b1141989e"} Dec 16 12:21:42 crc kubenswrapper[4805]: I1216 12:21:42.587422 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 16 12:21:42 crc kubenswrapper[4805]: I1216 12:21:42.818333 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 16 12:21:51 crc kubenswrapper[4805]: I1216 12:21:51.985481 4805 scope.go:117] "RemoveContainer" containerID="985f88c1f9a128dbb9319d7a54a3e3a7b4e72ec8f5b8e705eb481b82a7cd4102" Dec 16 12:21:55 crc kubenswrapper[4805]: E1216 12:21:55.734745 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:18.0-fr4-latest" Dec 16 12:21:55 crc kubenswrapper[4805]: E1216 12:21:55.737034 4805 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 16 12:21:55 crc kubenswrapper[4805]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:18.0-fr4-latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Dec 16 12:21:55 crc kubenswrapper[4805]: - hosts: all Dec 16 12:21:55 crc kubenswrapper[4805]: strategy: linear Dec 16 12:21:55 crc kubenswrapper[4805]: tasks: Dec 16 12:21:55 crc kubenswrapper[4805]: - name: Enable podified-repos Dec 16 12:21:55 crc kubenswrapper[4805]: become: true Dec 16 12:21:55 crc kubenswrapper[4805]: ansible.builtin.shell: | Dec 16 12:21:55 crc kubenswrapper[4805]: set -euxo pipefail Dec 16 12:21:55 crc kubenswrapper[4805]: pushd /var/tmp Dec 16 12:21:55 crc kubenswrapper[4805]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Dec 16 12:21:55 crc kubenswrapper[4805]: pushd repo-setup-main Dec 16 12:21:55 crc kubenswrapper[4805]: python3 -m venv ./venv Dec 16 12:21:55 crc kubenswrapper[4805]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Dec 16 12:21:55 crc kubenswrapper[4805]: ./venv/bin/repo-setup current-podified -b antelope Dec 16 12:21:55 crc kubenswrapper[4805]: popd Dec 16 12:21:55 crc kubenswrapper[4805]: rm -rf repo-setup-main Dec 16 12:21:55 crc kubenswrapper[4805]: Dec 16 12:21:55 crc kubenswrapper[4805]: Dec 16 12:21:55 crc kubenswrapper[4805]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Dec 16 12:21:55 crc kubenswrapper[4805]: edpm_override_hosts: openstack-edpm-ipam Dec 16 12:21:55 crc kubenswrapper[4805]: edpm_service_type: repo-setup Dec 16 12:21:55 crc kubenswrapper[4805]: Dec 16 12:21:55 crc kubenswrapper[4805]: Dec 16 12:21:55 crc kubenswrapper[4805]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/runner/env/ssh_key,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d57xv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2_openstack(99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Dec 16 12:21:55 crc kubenswrapper[4805]: > logger="UnhandledError"
Dec 16 12:21:55 crc kubenswrapper[4805]: I1216 12:21:55.739652 4805 scope.go:117] "RemoveContainer" containerID="1b205051b301473d43bda995381a061cd714e8ba20e2b4d2ed5487efd3a6c9ff"
Dec 16 12:21:55 crc kubenswrapper[4805]: E1216 12:21:55.739232 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" podUID="99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec"
Dec 16 12:21:55 crc kubenswrapper[4805]: I1216 12:21:55.798495 4805 scope.go:117] "RemoveContainer" containerID="0d12cc96b864154bb1e684e6dd7cc9990d58ab7a48b62303008832bebe1d0a2b"
Dec 16 12:21:56 crc kubenswrapper[4805]: E1216 12:21:56.008303 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:18.0-fr4-latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" podUID="99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec"
Dec 16 12:21:57 crc kubenswrapper[4805]: I1216 12:21:57.071205 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
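The sequence above is the standard image-pull failure path: the CRI pull fails (here with "context canceled"), kuberuntime_manager dumps the whole Container spec in an "Unhandled Error", pod_workers records ErrImagePull, and subsequent sync attempts are gated by ImagePullBackOff until the pull eventually succeeds (the ContainerStarted event at 12:22:11 below). A quick way to tally which images are failing to pull in such a journal, stdin assumed:

    #!/usr/bin/env python3
    # Count image-pull failures per image, using the "PullImage from image
    # service failed" records emitted by the kubelet.
    import re
    import sys
    from collections import Counter

    fails = Counter()
    img = re.compile(r'"PullImage from image service failed".*?image="([^"]+)"')
    for line in sys.stdin:
        m = img.search(line)
        if m:
            fails[m.group(1)] += 1
    for image, n in fails.most_common():
        print(n, image)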
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:21:57 crc kubenswrapper[4805]: I1216 12:21:57.071380 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 12:21:57 crc kubenswrapper[4805]: I1216 12:21:57.072331 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 12:21:57 crc kubenswrapper[4805]: I1216 12:21:57.072418 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" gracePeriod=600 Dec 16 12:21:57 crc kubenswrapper[4805]: E1216 12:21:57.200429 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:21:57 crc kubenswrapper[4805]: I1216 12:21:57.882465 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" exitCode=0 Dec 16 12:21:57 crc kubenswrapper[4805]: I1216 12:21:57.882511 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308"} Dec 16 12:21:57 crc kubenswrapper[4805]: I1216 12:21:57.882775 4805 scope.go:117] "RemoveContainer" containerID="52cf8f6f2f746633bfd1f446a1bda7ea7f3c784cb6d567e07a1cc669ed487201" Dec 16 12:21:57 crc kubenswrapper[4805]: I1216 12:21:57.883559 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:21:57 crc kubenswrapper[4805]: E1216 12:21:57.883848 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:22:02 crc kubenswrapper[4805]: I1216 12:22:02.449308 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6w6s5"] Dec 16 12:22:02 crc kubenswrapper[4805]: I1216 12:22:02.483356 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-6w6s5"] Dec 16 12:22:02 crc kubenswrapper[4805]: I1216 12:22:02.483581 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:02 crc kubenswrapper[4805]: I1216 12:22:02.527993 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d2fg\" (UniqueName: \"kubernetes.io/projected/1bd0a124-bd43-4f11-a762-3089cf100be1-kube-api-access-7d2fg\") pod \"community-operators-6w6s5\" (UID: \"1bd0a124-bd43-4f11-a762-3089cf100be1\") " pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:02 crc kubenswrapper[4805]: I1216 12:22:02.528183 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd0a124-bd43-4f11-a762-3089cf100be1-utilities\") pod \"community-operators-6w6s5\" (UID: \"1bd0a124-bd43-4f11-a762-3089cf100be1\") " pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:02 crc kubenswrapper[4805]: I1216 12:22:02.528389 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd0a124-bd43-4f11-a762-3089cf100be1-catalog-content\") pod \"community-operators-6w6s5\" (UID: \"1bd0a124-bd43-4f11-a762-3089cf100be1\") " pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:02 crc kubenswrapper[4805]: I1216 12:22:02.629774 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d2fg\" (UniqueName: \"kubernetes.io/projected/1bd0a124-bd43-4f11-a762-3089cf100be1-kube-api-access-7d2fg\") pod \"community-operators-6w6s5\" (UID: \"1bd0a124-bd43-4f11-a762-3089cf100be1\") " pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:02 crc kubenswrapper[4805]: I1216 12:22:02.629885 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd0a124-bd43-4f11-a762-3089cf100be1-utilities\") pod \"community-operators-6w6s5\" (UID: \"1bd0a124-bd43-4f11-a762-3089cf100be1\") " pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:02 crc kubenswrapper[4805]: I1216 12:22:02.630863 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd0a124-bd43-4f11-a762-3089cf100be1-utilities\") pod \"community-operators-6w6s5\" (UID: \"1bd0a124-bd43-4f11-a762-3089cf100be1\") " pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:02 crc kubenswrapper[4805]: I1216 12:22:02.631075 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd0a124-bd43-4f11-a762-3089cf100be1-catalog-content\") pod \"community-operators-6w6s5\" (UID: \"1bd0a124-bd43-4f11-a762-3089cf100be1\") " pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:02 crc kubenswrapper[4805]: I1216 12:22:02.631533 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd0a124-bd43-4f11-a762-3089cf100be1-catalog-content\") pod \"community-operators-6w6s5\" (UID: \"1bd0a124-bd43-4f11-a762-3089cf100be1\") " pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:02 crc kubenswrapper[4805]: I1216 12:22:02.663540 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7d2fg\" (UniqueName: \"kubernetes.io/projected/1bd0a124-bd43-4f11-a762-3089cf100be1-kube-api-access-7d2fg\") pod \"community-operators-6w6s5\" (UID: \"1bd0a124-bd43-4f11-a762-3089cf100be1\") " pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:02 crc kubenswrapper[4805]: I1216 12:22:02.816090 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:03 crc kubenswrapper[4805]: I1216 12:22:03.349891 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6w6s5"] Dec 16 12:22:03 crc kubenswrapper[4805]: I1216 12:22:03.996525 4805 generic.go:334] "Generic (PLEG): container finished" podID="1bd0a124-bd43-4f11-a762-3089cf100be1" containerID="4dd082cbc9b97ec59401584316e3c5d8e9c7551f28b035f353416f69a07a7834" exitCode=0 Dec 16 12:22:03 crc kubenswrapper[4805]: I1216 12:22:03.996590 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w6s5" event={"ID":"1bd0a124-bd43-4f11-a762-3089cf100be1","Type":"ContainerDied","Data":"4dd082cbc9b97ec59401584316e3c5d8e9c7551f28b035f353416f69a07a7834"} Dec 16 12:22:03 crc kubenswrapper[4805]: I1216 12:22:03.996841 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w6s5" event={"ID":"1bd0a124-bd43-4f11-a762-3089cf100be1","Type":"ContainerStarted","Data":"36603015cb5d9aa32a4798bd49b1fbb09a5c952e7561367b10f2bb0d2a122a45"} Dec 16 12:22:08 crc kubenswrapper[4805]: I1216 12:22:08.032978 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w6s5" event={"ID":"1bd0a124-bd43-4f11-a762-3089cf100be1","Type":"ContainerStarted","Data":"cefc9c8a2521dd4fb6995b3c3ddbb077543f1846e177438854d5bedfd77589d3"} Dec 16 12:22:09 crc kubenswrapper[4805]: I1216 12:22:09.522846 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:22:09 crc kubenswrapper[4805]: E1216 12:22:09.523230 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:22:10 crc kubenswrapper[4805]: I1216 12:22:10.052814 4805 generic.go:334] "Generic (PLEG): container finished" podID="1bd0a124-bd43-4f11-a762-3089cf100be1" containerID="cefc9c8a2521dd4fb6995b3c3ddbb077543f1846e177438854d5bedfd77589d3" exitCode=0 Dec 16 12:22:10 crc kubenswrapper[4805]: I1216 12:22:10.052913 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w6s5" event={"ID":"1bd0a124-bd43-4f11-a762-3089cf100be1","Type":"ContainerDied","Data":"cefc9c8a2521dd4fb6995b3c3ddbb077543f1846e177438854d5bedfd77589d3"} Dec 16 12:22:11 crc kubenswrapper[4805]: I1216 12:22:11.065949 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" event={"ID":"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec","Type":"ContainerStarted","Data":"a898b7bd29aed58cfe864c04a72ac7919077f9c41516bc98aaddf730a80dd754"} Dec 16 12:22:11 crc kubenswrapper[4805]: I1216 12:22:11.083799 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" podStartSLOduration=2.956717125 podStartE2EDuration="36.083777378s" podCreationTimestamp="2025-12-16 12:21:35 +0000 UTC" firstStartedPulling="2025-12-16 12:21:36.987470568 +0000 UTC m=+1570.705728373" lastFinishedPulling="2025-12-16 12:22:10.114530821 +0000 UTC m=+1603.832788626" observedRunningTime="2025-12-16 12:22:11.081966616 +0000 UTC m=+1604.800224441" watchObservedRunningTime="2025-12-16 12:22:11.083777378 +0000 UTC m=+1604.802035203" Dec 16 12:22:13 crc kubenswrapper[4805]: I1216 12:22:13.090046 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w6s5" event={"ID":"1bd0a124-bd43-4f11-a762-3089cf100be1","Type":"ContainerStarted","Data":"8ba89c1472f9c86d9468837d721e93d43d9a371706a7bbf9dde3a3b0890b2e0f"} Dec 16 12:22:13 crc kubenswrapper[4805]: I1216 12:22:13.112395 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6w6s5" podStartSLOduration=3.086104842 podStartE2EDuration="11.112375875s" podCreationTimestamp="2025-12-16 12:22:02 +0000 UTC" firstStartedPulling="2025-12-16 12:22:03.999195512 +0000 UTC m=+1597.717453317" lastFinishedPulling="2025-12-16 12:22:12.025466545 +0000 UTC m=+1605.743724350" observedRunningTime="2025-12-16 12:22:13.108305299 +0000 UTC m=+1606.826563104" watchObservedRunningTime="2025-12-16 12:22:13.112375875 +0000 UTC m=+1606.830633690" Dec 16 12:22:21 crc kubenswrapper[4805]: I1216 12:22:21.524014 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:22:21 crc kubenswrapper[4805]: E1216 12:22:21.524784 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:22:22 crc kubenswrapper[4805]: I1216 12:22:22.816452 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:22 crc kubenswrapper[4805]: I1216 12:22:22.816508 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:22 crc kubenswrapper[4805]: I1216 12:22:22.858906 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:23 crc kubenswrapper[4805]: I1216 12:22:23.276287 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:23 crc kubenswrapper[4805]: I1216 12:22:23.335676 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6w6s5"] Dec 16 12:22:24 crc kubenswrapper[4805]: I1216 12:22:24.239242 4805 generic.go:334] "Generic (PLEG): container finished" podID="99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec" containerID="a898b7bd29aed58cfe864c04a72ac7919077f9c41516bc98aaddf730a80dd754" exitCode=0 Dec 16 12:22:24 crc kubenswrapper[4805]: I1216 12:22:24.239330 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" event={"ID":"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec","Type":"ContainerDied","Data":"a898b7bd29aed58cfe864c04a72ac7919077f9c41516bc98aaddf730a80dd754"} Dec 16 12:22:25 crc kubenswrapper[4805]: I1216 12:22:25.248895 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6w6s5" podUID="1bd0a124-bd43-4f11-a762-3089cf100be1" containerName="registry-server" containerID="cri-o://8ba89c1472f9c86d9468837d721e93d43d9a371706a7bbf9dde3a3b0890b2e0f" gracePeriod=2 Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.269598 4805 generic.go:334] "Generic (PLEG): container finished" podID="1bd0a124-bd43-4f11-a762-3089cf100be1" containerID="8ba89c1472f9c86d9468837d721e93d43d9a371706a7bbf9dde3a3b0890b2e0f" exitCode=0 Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.269684 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w6s5" event={"ID":"1bd0a124-bd43-4f11-a762-3089cf100be1","Type":"ContainerDied","Data":"8ba89c1472f9c86d9468837d721e93d43d9a371706a7bbf9dde3a3b0890b2e0f"} Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.430000 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.573977 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d2fg\" (UniqueName: \"kubernetes.io/projected/1bd0a124-bd43-4f11-a762-3089cf100be1-kube-api-access-7d2fg\") pod \"1bd0a124-bd43-4f11-a762-3089cf100be1\" (UID: \"1bd0a124-bd43-4f11-a762-3089cf100be1\") " Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.574152 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd0a124-bd43-4f11-a762-3089cf100be1-utilities\") pod \"1bd0a124-bd43-4f11-a762-3089cf100be1\" (UID: \"1bd0a124-bd43-4f11-a762-3089cf100be1\") " Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.574202 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd0a124-bd43-4f11-a762-3089cf100be1-catalog-content\") pod \"1bd0a124-bd43-4f11-a762-3089cf100be1\" (UID: \"1bd0a124-bd43-4f11-a762-3089cf100be1\") " Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.578352 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd0a124-bd43-4f11-a762-3089cf100be1-utilities" (OuterVolumeSpecName: "utilities") pod "1bd0a124-bd43-4f11-a762-3089cf100be1" (UID: "1bd0a124-bd43-4f11-a762-3089cf100be1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.605688 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd0a124-bd43-4f11-a762-3089cf100be1-kube-api-access-7d2fg" (OuterVolumeSpecName: "kube-api-access-7d2fg") pod "1bd0a124-bd43-4f11-a762-3089cf100be1" (UID: "1bd0a124-bd43-4f11-a762-3089cf100be1"). InnerVolumeSpecName "kube-api-access-7d2fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.663064 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.666910 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd0a124-bd43-4f11-a762-3089cf100be1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bd0a124-bd43-4f11-a762-3089cf100be1" (UID: "1bd0a124-bd43-4f11-a762-3089cf100be1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.681215 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d2fg\" (UniqueName: \"kubernetes.io/projected/1bd0a124-bd43-4f11-a762-3089cf100be1-kube-api-access-7d2fg\") on node \"crc\" DevicePath \"\"" Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.681270 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd0a124-bd43-4f11-a762-3089cf100be1-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.681285 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd0a124-bd43-4f11-a762-3089cf100be1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.782486 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-ssh-key\") pod \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.782609 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d57xv\" (UniqueName: \"kubernetes.io/projected/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-kube-api-access-d57xv\") pod \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.782686 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-repo-setup-combined-ca-bundle\") pod \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.782763 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-inventory\") pod \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\" (UID: \"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec\") " Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.786319 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-kube-api-access-d57xv" (OuterVolumeSpecName: "kube-api-access-d57xv") pod "99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec" (UID: "99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec"). InnerVolumeSpecName "kube-api-access-d57xv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.798341 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec" (UID: "99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.810692 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-inventory" (OuterVolumeSpecName: "inventory") pod "99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec" (UID: "99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.823984 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec" (UID: "99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.886816 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.886882 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d57xv\" (UniqueName: \"kubernetes.io/projected/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-kube-api-access-d57xv\") on node \"crc\" DevicePath \"\"" Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.886899 4805 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:22:26 crc kubenswrapper[4805]: I1216 12:22:26.886916 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec-inventory\") on node \"crc\" DevicePath \"\"" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.279304 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w6s5" event={"ID":"1bd0a124-bd43-4f11-a762-3089cf100be1","Type":"ContainerDied","Data":"36603015cb5d9aa32a4798bd49b1fbb09a5c952e7561367b10f2bb0d2a122a45"} Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.279361 4805 scope.go:117] "RemoveContainer" containerID="8ba89c1472f9c86d9468837d721e93d43d9a371706a7bbf9dde3a3b0890b2e0f" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.279519 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6w6s5" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.285358 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" event={"ID":"99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec","Type":"ContainerDied","Data":"33ee75854a6ce95bc7f0454cb2468fc4c1ca20bd6b1edc83c2739e1b1141989e"} Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.285776 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33ee75854a6ce95bc7f0454cb2468fc4c1ca20bd6b1edc83c2739e1b1141989e" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.285594 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.325528 4805 scope.go:117] "RemoveContainer" containerID="cefc9c8a2521dd4fb6995b3c3ddbb077543f1846e177438854d5bedfd77589d3" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.328201 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6w6s5"] Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.338683 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6w6s5"] Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.355813 4805 scope.go:117] "RemoveContainer" containerID="4dd082cbc9b97ec59401584316e3c5d8e9c7551f28b035f353416f69a07a7834" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.766373 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm"] Dec 16 12:22:27 crc kubenswrapper[4805]: E1216 12:22:27.767056 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd0a124-bd43-4f11-a762-3089cf100be1" containerName="extract-content" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.767123 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd0a124-bd43-4f11-a762-3089cf100be1" containerName="extract-content" Dec 16 12:22:27 crc kubenswrapper[4805]: E1216 12:22:27.767210 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd0a124-bd43-4f11-a762-3089cf100be1" containerName="extract-utilities" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.767258 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd0a124-bd43-4f11-a762-3089cf100be1" containerName="extract-utilities" Dec 16 12:22:27 crc kubenswrapper[4805]: E1216 12:22:27.767319 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd0a124-bd43-4f11-a762-3089cf100be1" containerName="registry-server" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.767366 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd0a124-bd43-4f11-a762-3089cf100be1" containerName="registry-server" Dec 16 12:22:27 crc kubenswrapper[4805]: E1216 12:22:27.767430 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.767489 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.767731 4805 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.767795 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bd0a124-bd43-4f11-a762-3089cf100be1" containerName="registry-server" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.768495 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.770892 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.771249 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.771572 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.771876 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.784019 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm"] Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.905099 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40b28c68-1737-4ad9-a361-43581b880c4b-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-99pfm\" (UID: \"40b28c68-1737-4ad9-a361-43581b880c4b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.905238 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40b28c68-1737-4ad9-a361-43581b880c4b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-99pfm\" (UID: \"40b28c68-1737-4ad9-a361-43581b880c4b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" Dec 16 12:22:27 crc kubenswrapper[4805]: I1216 12:22:27.905459 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp5nl\" (UniqueName: \"kubernetes.io/projected/40b28c68-1737-4ad9-a361-43581b880c4b-kube-api-access-pp5nl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-99pfm\" (UID: \"40b28c68-1737-4ad9-a361-43581b880c4b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" Dec 16 12:22:28 crc kubenswrapper[4805]: I1216 12:22:28.007449 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40b28c68-1737-4ad9-a361-43581b880c4b-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-99pfm\" (UID: \"40b28c68-1737-4ad9-a361-43581b880c4b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" Dec 16 12:22:28 crc kubenswrapper[4805]: I1216 12:22:28.007517 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40b28c68-1737-4ad9-a361-43581b880c4b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-99pfm\" (UID: \"40b28c68-1737-4ad9-a361-43581b880c4b\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" Dec 16 12:22:28 crc kubenswrapper[4805]: I1216 12:22:28.007578 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp5nl\" (UniqueName: \"kubernetes.io/projected/40b28c68-1737-4ad9-a361-43581b880c4b-kube-api-access-pp5nl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-99pfm\" (UID: \"40b28c68-1737-4ad9-a361-43581b880c4b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" Dec 16 12:22:28 crc kubenswrapper[4805]: I1216 12:22:28.015757 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40b28c68-1737-4ad9-a361-43581b880c4b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-99pfm\" (UID: \"40b28c68-1737-4ad9-a361-43581b880c4b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" Dec 16 12:22:28 crc kubenswrapper[4805]: I1216 12:22:28.017634 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40b28c68-1737-4ad9-a361-43581b880c4b-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-99pfm\" (UID: \"40b28c68-1737-4ad9-a361-43581b880c4b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" Dec 16 12:22:28 crc kubenswrapper[4805]: I1216 12:22:28.032509 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp5nl\" (UniqueName: \"kubernetes.io/projected/40b28c68-1737-4ad9-a361-43581b880c4b-kube-api-access-pp5nl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-99pfm\" (UID: \"40b28c68-1737-4ad9-a361-43581b880c4b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" Dec 16 12:22:28 crc kubenswrapper[4805]: I1216 12:22:28.118410 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" Dec 16 12:22:28 crc kubenswrapper[4805]: I1216 12:22:28.549060 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bd0a124-bd43-4f11-a762-3089cf100be1" path="/var/lib/kubelet/pods/1bd0a124-bd43-4f11-a762-3089cf100be1/volumes" Dec 16 12:22:28 crc kubenswrapper[4805]: I1216 12:22:28.682576 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm"] Dec 16 12:22:28 crc kubenswrapper[4805]: W1216 12:22:28.683580 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40b28c68_1737_4ad9_a361_43581b880c4b.slice/crio-66ec137af84cf536ee001ca924f6dc7686c72a6c1974fe2d3bc739f9a4b569fc WatchSource:0}: Error finding container 66ec137af84cf536ee001ca924f6dc7686c72a6c1974fe2d3bc739f9a4b569fc: Status 404 returned error can't find the container with id 66ec137af84cf536ee001ca924f6dc7686c72a6c1974fe2d3bc739f9a4b569fc Dec 16 12:22:29 crc kubenswrapper[4805]: I1216 12:22:29.305893 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" event={"ID":"40b28c68-1737-4ad9-a361-43581b880c4b","Type":"ContainerStarted","Data":"66ec137af84cf536ee001ca924f6dc7686c72a6c1974fe2d3bc739f9a4b569fc"} Dec 16 12:22:30 crc kubenswrapper[4805]: I1216 12:22:30.328765 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" event={"ID":"40b28c68-1737-4ad9-a361-43581b880c4b","Type":"ContainerStarted","Data":"7f823151087e91c2c98a54c4474af9e1549917ed94fdaf672befdbce172212cb"} Dec 16 12:22:33 crc kubenswrapper[4805]: I1216 12:22:33.353855 4805 generic.go:334] "Generic (PLEG): container finished" podID="40b28c68-1737-4ad9-a361-43581b880c4b" containerID="7f823151087e91c2c98a54c4474af9e1549917ed94fdaf672befdbce172212cb" exitCode=0 Dec 16 12:22:33 crc kubenswrapper[4805]: I1216 12:22:33.354227 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" event={"ID":"40b28c68-1737-4ad9-a361-43581b880c4b","Type":"ContainerDied","Data":"7f823151087e91c2c98a54c4474af9e1549917ed94fdaf672befdbce172212cb"} Dec 16 12:22:33 crc kubenswrapper[4805]: I1216 12:22:33.524067 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:22:33 crc kubenswrapper[4805]: E1216 12:22:33.524337 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:22:34 crc kubenswrapper[4805]: I1216 12:22:34.815896 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" Dec 16 12:22:34 crc kubenswrapper[4805]: I1216 12:22:34.979981 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40b28c68-1737-4ad9-a361-43581b880c4b-inventory\") pod \"40b28c68-1737-4ad9-a361-43581b880c4b\" (UID: \"40b28c68-1737-4ad9-a361-43581b880c4b\") " Dec 16 12:22:34 crc kubenswrapper[4805]: I1216 12:22:34.980263 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp5nl\" (UniqueName: \"kubernetes.io/projected/40b28c68-1737-4ad9-a361-43581b880c4b-kube-api-access-pp5nl\") pod \"40b28c68-1737-4ad9-a361-43581b880c4b\" (UID: \"40b28c68-1737-4ad9-a361-43581b880c4b\") " Dec 16 12:22:34 crc kubenswrapper[4805]: I1216 12:22:34.980838 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40b28c68-1737-4ad9-a361-43581b880c4b-ssh-key\") pod \"40b28c68-1737-4ad9-a361-43581b880c4b\" (UID: \"40b28c68-1737-4ad9-a361-43581b880c4b\") " Dec 16 12:22:34 crc kubenswrapper[4805]: I1216 12:22:34.985228 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b28c68-1737-4ad9-a361-43581b880c4b-kube-api-access-pp5nl" (OuterVolumeSpecName: "kube-api-access-pp5nl") pod "40b28c68-1737-4ad9-a361-43581b880c4b" (UID: "40b28c68-1737-4ad9-a361-43581b880c4b"). InnerVolumeSpecName "kube-api-access-pp5nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.017403 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b28c68-1737-4ad9-a361-43581b880c4b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "40b28c68-1737-4ad9-a361-43581b880c4b" (UID: "40b28c68-1737-4ad9-a361-43581b880c4b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.023677 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b28c68-1737-4ad9-a361-43581b880c4b-inventory" (OuterVolumeSpecName: "inventory") pod "40b28c68-1737-4ad9-a361-43581b880c4b" (UID: "40b28c68-1737-4ad9-a361-43581b880c4b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.084329 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40b28c68-1737-4ad9-a361-43581b880c4b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.084380 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40b28c68-1737-4ad9-a361-43581b880c4b-inventory\") on node \"crc\" DevicePath \"\"" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.084399 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp5nl\" (UniqueName: \"kubernetes.io/projected/40b28c68-1737-4ad9-a361-43581b880c4b-kube-api-access-pp5nl\") on node \"crc\" DevicePath \"\"" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.373472 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" event={"ID":"40b28c68-1737-4ad9-a361-43581b880c4b","Type":"ContainerDied","Data":"66ec137af84cf536ee001ca924f6dc7686c72a6c1974fe2d3bc739f9a4b569fc"} Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.373511 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66ec137af84cf536ee001ca924f6dc7686c72a6c1974fe2d3bc739f9a4b569fc" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.373562 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-99pfm" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.478006 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb"] Dec 16 12:22:35 crc kubenswrapper[4805]: E1216 12:22:35.478533 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b28c68-1737-4ad9-a361-43581b880c4b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.478557 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b28c68-1737-4ad9-a361-43581b880c4b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.478824 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="40b28c68-1737-4ad9-a361-43581b880c4b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.479702 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.481991 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.486917 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.492792 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.487090 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.509402 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb"] Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.615435 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.615903 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.616062 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.616277 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lts8s\" (UniqueName: \"kubernetes.io/projected/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-kube-api-access-lts8s\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.718437 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.718563 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.718623 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lts8s\" (UniqueName: \"kubernetes.io/projected/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-kube-api-access-lts8s\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.718771 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.724773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.727214 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.751411 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.769944 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lts8s\" (UniqueName: \"kubernetes.io/projected/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-kube-api-access-lts8s\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:22:35 crc kubenswrapper[4805]: I1216 12:22:35.803701 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:22:36 crc kubenswrapper[4805]: I1216 12:22:36.483054 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb"] Dec 16 12:22:36 crc kubenswrapper[4805]: I1216 12:22:36.484402 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 16 12:22:37 crc kubenswrapper[4805]: I1216 12:22:37.395519 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" event={"ID":"d0396b0a-2aae-4507-a31e-cbd5f936f3eb","Type":"ContainerStarted","Data":"f6642e0bd0fc15e40116d385fea18c63462b165ea7894a1c168a1ff41f4ec817"} Dec 16 12:22:37 crc kubenswrapper[4805]: I1216 12:22:37.395928 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" event={"ID":"d0396b0a-2aae-4507-a31e-cbd5f936f3eb","Type":"ContainerStarted","Data":"a266f2300b2af9804435d127d89cf5fc9efd4dec127dd15e84199b94f2f4e2ab"} Dec 16 12:22:37 crc kubenswrapper[4805]: I1216 12:22:37.420646 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" podStartSLOduration=1.821359033 podStartE2EDuration="2.420620902s" podCreationTimestamp="2025-12-16 12:22:35 +0000 UTC" firstStartedPulling="2025-12-16 12:22:36.4839626 +0000 UTC m=+1630.202220405" lastFinishedPulling="2025-12-16 12:22:37.083224479 +0000 UTC m=+1630.801482274" observedRunningTime="2025-12-16 12:22:37.413988902 +0000 UTC m=+1631.132246717" watchObservedRunningTime="2025-12-16 12:22:37.420620902 +0000 UTC m=+1631.138878717" Dec 16 12:22:48 crc kubenswrapper[4805]: I1216 12:22:48.522779 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:22:48 crc kubenswrapper[4805]: E1216 12:22:48.526033 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:22:51 crc kubenswrapper[4805]: I1216 12:22:51.338247 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z2rjp"] Dec 16 12:22:51 crc kubenswrapper[4805]: I1216 12:22:51.341080 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:22:51 crc kubenswrapper[4805]: I1216 12:22:51.355941 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z2rjp"] Dec 16 12:22:51 crc kubenswrapper[4805]: I1216 12:22:51.409681 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb1d6f2-253b-4846-994c-362e7d90c1e8-utilities\") pod \"redhat-marketplace-z2rjp\" (UID: \"1fb1d6f2-253b-4846-994c-362e7d90c1e8\") " pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:22:51 crc kubenswrapper[4805]: I1216 12:22:51.409763 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb1d6f2-253b-4846-994c-362e7d90c1e8-catalog-content\") pod \"redhat-marketplace-z2rjp\" (UID: \"1fb1d6f2-253b-4846-994c-362e7d90c1e8\") " pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:22:51 crc kubenswrapper[4805]: I1216 12:22:51.409861 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4bfd\" (UniqueName: \"kubernetes.io/projected/1fb1d6f2-253b-4846-994c-362e7d90c1e8-kube-api-access-c4bfd\") pod \"redhat-marketplace-z2rjp\" (UID: \"1fb1d6f2-253b-4846-994c-362e7d90c1e8\") " pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:22:51 crc kubenswrapper[4805]: I1216 12:22:51.511221 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4bfd\" (UniqueName: \"kubernetes.io/projected/1fb1d6f2-253b-4846-994c-362e7d90c1e8-kube-api-access-c4bfd\") pod \"redhat-marketplace-z2rjp\" (UID: \"1fb1d6f2-253b-4846-994c-362e7d90c1e8\") " pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:22:51 crc kubenswrapper[4805]: I1216 12:22:51.511395 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb1d6f2-253b-4846-994c-362e7d90c1e8-utilities\") pod \"redhat-marketplace-z2rjp\" (UID: \"1fb1d6f2-253b-4846-994c-362e7d90c1e8\") " pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:22:51 crc kubenswrapper[4805]: I1216 12:22:51.511418 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb1d6f2-253b-4846-994c-362e7d90c1e8-catalog-content\") pod \"redhat-marketplace-z2rjp\" (UID: \"1fb1d6f2-253b-4846-994c-362e7d90c1e8\") " pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:22:51 crc kubenswrapper[4805]: I1216 12:22:51.511977 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb1d6f2-253b-4846-994c-362e7d90c1e8-utilities\") pod \"redhat-marketplace-z2rjp\" (UID: \"1fb1d6f2-253b-4846-994c-362e7d90c1e8\") " pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:22:51 crc kubenswrapper[4805]: I1216 12:22:51.512008 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb1d6f2-253b-4846-994c-362e7d90c1e8-catalog-content\") pod \"redhat-marketplace-z2rjp\" (UID: \"1fb1d6f2-253b-4846-994c-362e7d90c1e8\") " pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:22:51 crc kubenswrapper[4805]: I1216 12:22:51.530308 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c4bfd\" (UniqueName: \"kubernetes.io/projected/1fb1d6f2-253b-4846-994c-362e7d90c1e8-kube-api-access-c4bfd\") pod \"redhat-marketplace-z2rjp\" (UID: \"1fb1d6f2-253b-4846-994c-362e7d90c1e8\") " pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:22:51 crc kubenswrapper[4805]: I1216 12:22:51.671214 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:22:52 crc kubenswrapper[4805]: I1216 12:22:52.173794 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z2rjp"] Dec 16 12:22:52 crc kubenswrapper[4805]: W1216 12:22:52.179297 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fb1d6f2_253b_4846_994c_362e7d90c1e8.slice/crio-2f58c931ff9a8031552c32fb660de5eecbff324a055e53a757ac5972fd9c391f WatchSource:0}: Error finding container 2f58c931ff9a8031552c32fb660de5eecbff324a055e53a757ac5972fd9c391f: Status 404 returned error can't find the container with id 2f58c931ff9a8031552c32fb660de5eecbff324a055e53a757ac5972fd9c391f Dec 16 12:22:52 crc kubenswrapper[4805]: I1216 12:22:52.564611 4805 generic.go:334] "Generic (PLEG): container finished" podID="1fb1d6f2-253b-4846-994c-362e7d90c1e8" containerID="07e15df48e35dc3d2fba39eeb2e899fd5e3be46fc0dd1c8209e0652a34bbb670" exitCode=0 Dec 16 12:22:52 crc kubenswrapper[4805]: I1216 12:22:52.564728 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2rjp" event={"ID":"1fb1d6f2-253b-4846-994c-362e7d90c1e8","Type":"ContainerDied","Data":"07e15df48e35dc3d2fba39eeb2e899fd5e3be46fc0dd1c8209e0652a34bbb670"} Dec 16 12:22:52 crc kubenswrapper[4805]: I1216 12:22:52.564967 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2rjp" event={"ID":"1fb1d6f2-253b-4846-994c-362e7d90c1e8","Type":"ContainerStarted","Data":"2f58c931ff9a8031552c32fb660de5eecbff324a055e53a757ac5972fd9c391f"} Dec 16 12:22:54 crc kubenswrapper[4805]: I1216 12:22:54.850251 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2rjp" event={"ID":"1fb1d6f2-253b-4846-994c-362e7d90c1e8","Type":"ContainerStarted","Data":"bcfee8821a6b931bfd4dbf2deeb15ce8b138e08aba6079a140749c5ca49385d8"} Dec 16 12:22:55 crc kubenswrapper[4805]: I1216 12:22:55.861609 4805 generic.go:334] "Generic (PLEG): container finished" podID="1fb1d6f2-253b-4846-994c-362e7d90c1e8" containerID="bcfee8821a6b931bfd4dbf2deeb15ce8b138e08aba6079a140749c5ca49385d8" exitCode=0 Dec 16 12:22:55 crc kubenswrapper[4805]: I1216 12:22:55.861666 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2rjp" event={"ID":"1fb1d6f2-253b-4846-994c-362e7d90c1e8","Type":"ContainerDied","Data":"bcfee8821a6b931bfd4dbf2deeb15ce8b138e08aba6079a140749c5ca49385d8"} Dec 16 12:22:56 crc kubenswrapper[4805]: I1216 12:22:56.874259 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2rjp" event={"ID":"1fb1d6f2-253b-4846-994c-362e7d90c1e8","Type":"ContainerStarted","Data":"bcef3d72f4e4f1247990e912655f38ac9389323ca4c4bae21c1d18860edd68b0"} Dec 16 12:22:56 crc kubenswrapper[4805]: I1216 12:22:56.894655 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z2rjp" podStartSLOduration=2.177991917 
podStartE2EDuration="5.894629298s" podCreationTimestamp="2025-12-16 12:22:51 +0000 UTC" firstStartedPulling="2025-12-16 12:22:52.566308421 +0000 UTC m=+1646.284566226" lastFinishedPulling="2025-12-16 12:22:56.282945792 +0000 UTC m=+1650.001203607" observedRunningTime="2025-12-16 12:22:56.891848578 +0000 UTC m=+1650.610106383" watchObservedRunningTime="2025-12-16 12:22:56.894629298 +0000 UTC m=+1650.612887113" Dec 16 12:22:58 crc kubenswrapper[4805]: I1216 12:22:58.736779 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nr82b"] Dec 16 12:22:58 crc kubenswrapper[4805]: I1216 12:22:58.739714 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:22:58 crc kubenswrapper[4805]: I1216 12:22:58.755480 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nr82b"] Dec 16 12:22:58 crc kubenswrapper[4805]: I1216 12:22:58.879491 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32c2580b-5472-430d-a723-926712998a6e-catalog-content\") pod \"certified-operators-nr82b\" (UID: \"32c2580b-5472-430d-a723-926712998a6e\") " pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:22:58 crc kubenswrapper[4805]: I1216 12:22:58.879640 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32c2580b-5472-430d-a723-926712998a6e-utilities\") pod \"certified-operators-nr82b\" (UID: \"32c2580b-5472-430d-a723-926712998a6e\") " pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:22:58 crc kubenswrapper[4805]: I1216 12:22:58.879691 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4dmz\" (UniqueName: \"kubernetes.io/projected/32c2580b-5472-430d-a723-926712998a6e-kube-api-access-k4dmz\") pod \"certified-operators-nr82b\" (UID: \"32c2580b-5472-430d-a723-926712998a6e\") " pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:22:58 crc kubenswrapper[4805]: I1216 12:22:58.981836 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32c2580b-5472-430d-a723-926712998a6e-catalog-content\") pod \"certified-operators-nr82b\" (UID: \"32c2580b-5472-430d-a723-926712998a6e\") " pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:22:58 crc kubenswrapper[4805]: I1216 12:22:58.981952 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32c2580b-5472-430d-a723-926712998a6e-utilities\") pod \"certified-operators-nr82b\" (UID: \"32c2580b-5472-430d-a723-926712998a6e\") " pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:22:58 crc kubenswrapper[4805]: I1216 12:22:58.981990 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4dmz\" (UniqueName: \"kubernetes.io/projected/32c2580b-5472-430d-a723-926712998a6e-kube-api-access-k4dmz\") pod \"certified-operators-nr82b\" (UID: \"32c2580b-5472-430d-a723-926712998a6e\") " pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:22:58 crc kubenswrapper[4805]: I1216 12:22:58.982687 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/32c2580b-5472-430d-a723-926712998a6e-catalog-content\") pod \"certified-operators-nr82b\" (UID: \"32c2580b-5472-430d-a723-926712998a6e\") " pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:22:58 crc kubenswrapper[4805]: I1216 12:22:58.982774 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32c2580b-5472-430d-a723-926712998a6e-utilities\") pod \"certified-operators-nr82b\" (UID: \"32c2580b-5472-430d-a723-926712998a6e\") " pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:22:59 crc kubenswrapper[4805]: I1216 12:22:59.009081 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4dmz\" (UniqueName: \"kubernetes.io/projected/32c2580b-5472-430d-a723-926712998a6e-kube-api-access-k4dmz\") pod \"certified-operators-nr82b\" (UID: \"32c2580b-5472-430d-a723-926712998a6e\") " pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:22:59 crc kubenswrapper[4805]: I1216 12:22:59.070897 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:22:59 crc kubenswrapper[4805]: I1216 12:22:59.522883 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:22:59 crc kubenswrapper[4805]: E1216 12:22:59.523405 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:22:59 crc kubenswrapper[4805]: I1216 12:22:59.586189 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nr82b"] Dec 16 12:22:59 crc kubenswrapper[4805]: I1216 12:22:59.902060 4805 generic.go:334] "Generic (PLEG): container finished" podID="32c2580b-5472-430d-a723-926712998a6e" containerID="6d2dfd08c728c3f3fc00d66e6eaf838db75696bc0c2e39b94e5a344ced7feae5" exitCode=0 Dec 16 12:22:59 crc kubenswrapper[4805]: I1216 12:22:59.902117 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nr82b" event={"ID":"32c2580b-5472-430d-a723-926712998a6e","Type":"ContainerDied","Data":"6d2dfd08c728c3f3fc00d66e6eaf838db75696bc0c2e39b94e5a344ced7feae5"} Dec 16 12:22:59 crc kubenswrapper[4805]: I1216 12:22:59.902169 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nr82b" event={"ID":"32c2580b-5472-430d-a723-926712998a6e","Type":"ContainerStarted","Data":"f63a2532855717355391ccb62dacfe7d59967b126942f9c8d6f37c28891da7a9"} Dec 16 12:23:01 crc kubenswrapper[4805]: I1216 12:23:01.671386 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:23:01 crc kubenswrapper[4805]: I1216 12:23:01.671880 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:23:01 crc kubenswrapper[4805]: I1216 12:23:01.755266 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:23:01 crc 
kubenswrapper[4805]: I1216 12:23:01.919793 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nr82b" event={"ID":"32c2580b-5472-430d-a723-926712998a6e","Type":"ContainerStarted","Data":"24e09863d17d359467a0547bf3ddaeb805de2547f0367973dd09014d554004ae"} Dec 16 12:23:01 crc kubenswrapper[4805]: I1216 12:23:01.966828 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:23:03 crc kubenswrapper[4805]: I1216 12:23:03.713453 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z2rjp"] Dec 16 12:23:03 crc kubenswrapper[4805]: I1216 12:23:03.935470 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z2rjp" podUID="1fb1d6f2-253b-4846-994c-362e7d90c1e8" containerName="registry-server" containerID="cri-o://bcef3d72f4e4f1247990e912655f38ac9389323ca4c4bae21c1d18860edd68b0" gracePeriod=2 Dec 16 12:23:04 crc kubenswrapper[4805]: I1216 12:23:04.959680 4805 generic.go:334] "Generic (PLEG): container finished" podID="1fb1d6f2-253b-4846-994c-362e7d90c1e8" containerID="bcef3d72f4e4f1247990e912655f38ac9389323ca4c4bae21c1d18860edd68b0" exitCode=0 Dec 16 12:23:04 crc kubenswrapper[4805]: I1216 12:23:04.959753 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2rjp" event={"ID":"1fb1d6f2-253b-4846-994c-362e7d90c1e8","Type":"ContainerDied","Data":"bcef3d72f4e4f1247990e912655f38ac9389323ca4c4bae21c1d18860edd68b0"} Dec 16 12:23:04 crc kubenswrapper[4805]: I1216 12:23:04.962451 4805 generic.go:334] "Generic (PLEG): container finished" podID="32c2580b-5472-430d-a723-926712998a6e" containerID="24e09863d17d359467a0547bf3ddaeb805de2547f0367973dd09014d554004ae" exitCode=0 Dec 16 12:23:04 crc kubenswrapper[4805]: I1216 12:23:04.962500 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nr82b" event={"ID":"32c2580b-5472-430d-a723-926712998a6e","Type":"ContainerDied","Data":"24e09863d17d359467a0547bf3ddaeb805de2547f0367973dd09014d554004ae"} Dec 16 12:23:05 crc kubenswrapper[4805]: I1216 12:23:05.086287 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:23:05 crc kubenswrapper[4805]: I1216 12:23:05.208386 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4bfd\" (UniqueName: \"kubernetes.io/projected/1fb1d6f2-253b-4846-994c-362e7d90c1e8-kube-api-access-c4bfd\") pod \"1fb1d6f2-253b-4846-994c-362e7d90c1e8\" (UID: \"1fb1d6f2-253b-4846-994c-362e7d90c1e8\") " Dec 16 12:23:05 crc kubenswrapper[4805]: I1216 12:23:05.208524 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb1d6f2-253b-4846-994c-362e7d90c1e8-utilities\") pod \"1fb1d6f2-253b-4846-994c-362e7d90c1e8\" (UID: \"1fb1d6f2-253b-4846-994c-362e7d90c1e8\") " Dec 16 12:23:05 crc kubenswrapper[4805]: I1216 12:23:05.208627 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb1d6f2-253b-4846-994c-362e7d90c1e8-catalog-content\") pod \"1fb1d6f2-253b-4846-994c-362e7d90c1e8\" (UID: \"1fb1d6f2-253b-4846-994c-362e7d90c1e8\") " Dec 16 12:23:05 crc kubenswrapper[4805]: I1216 12:23:05.209594 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fb1d6f2-253b-4846-994c-362e7d90c1e8-utilities" (OuterVolumeSpecName: "utilities") pod "1fb1d6f2-253b-4846-994c-362e7d90c1e8" (UID: "1fb1d6f2-253b-4846-994c-362e7d90c1e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:23:05 crc kubenswrapper[4805]: I1216 12:23:05.214557 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb1d6f2-253b-4846-994c-362e7d90c1e8-kube-api-access-c4bfd" (OuterVolumeSpecName: "kube-api-access-c4bfd") pod "1fb1d6f2-253b-4846-994c-362e7d90c1e8" (UID: "1fb1d6f2-253b-4846-994c-362e7d90c1e8"). InnerVolumeSpecName "kube-api-access-c4bfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:23:05 crc kubenswrapper[4805]: I1216 12:23:05.227288 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fb1d6f2-253b-4846-994c-362e7d90c1e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fb1d6f2-253b-4846-994c-362e7d90c1e8" (UID: "1fb1d6f2-253b-4846-994c-362e7d90c1e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:23:05 crc kubenswrapper[4805]: I1216 12:23:05.311484 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb1d6f2-253b-4846-994c-362e7d90c1e8-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:23:05 crc kubenswrapper[4805]: I1216 12:23:05.311539 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb1d6f2-253b-4846-994c-362e7d90c1e8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:23:05 crc kubenswrapper[4805]: I1216 12:23:05.311553 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4bfd\" (UniqueName: \"kubernetes.io/projected/1fb1d6f2-253b-4846-994c-362e7d90c1e8-kube-api-access-c4bfd\") on node \"crc\" DevicePath \"\"" Dec 16 12:23:06 crc kubenswrapper[4805]: I1216 12:23:06.002719 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nr82b" event={"ID":"32c2580b-5472-430d-a723-926712998a6e","Type":"ContainerStarted","Data":"08e4d194978ab195fa1eed9a8f55306b5f5f23aa190d4d8dd3c14e608b879ffa"} Dec 16 12:23:06 crc kubenswrapper[4805]: I1216 12:23:06.008016 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2rjp" event={"ID":"1fb1d6f2-253b-4846-994c-362e7d90c1e8","Type":"ContainerDied","Data":"2f58c931ff9a8031552c32fb660de5eecbff324a055e53a757ac5972fd9c391f"} Dec 16 12:23:06 crc kubenswrapper[4805]: I1216 12:23:06.008073 4805 scope.go:117] "RemoveContainer" containerID="bcef3d72f4e4f1247990e912655f38ac9389323ca4c4bae21c1d18860edd68b0" Dec 16 12:23:06 crc kubenswrapper[4805]: I1216 12:23:06.008088 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z2rjp" Dec 16 12:23:06 crc kubenswrapper[4805]: I1216 12:23:06.039948 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nr82b" podStartSLOduration=2.329021237 podStartE2EDuration="8.039923682s" podCreationTimestamp="2025-12-16 12:22:58 +0000 UTC" firstStartedPulling="2025-12-16 12:22:59.903507389 +0000 UTC m=+1653.621765184" lastFinishedPulling="2025-12-16 12:23:05.614409824 +0000 UTC m=+1659.332667629" observedRunningTime="2025-12-16 12:23:06.034722333 +0000 UTC m=+1659.752980138" watchObservedRunningTime="2025-12-16 12:23:06.039923682 +0000 UTC m=+1659.758181497" Dec 16 12:23:06 crc kubenswrapper[4805]: I1216 12:23:06.049492 4805 scope.go:117] "RemoveContainer" containerID="bcfee8821a6b931bfd4dbf2deeb15ce8b138e08aba6079a140749c5ca49385d8" Dec 16 12:23:06 crc kubenswrapper[4805]: I1216 12:23:06.082210 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z2rjp"] Dec 16 12:23:06 crc kubenswrapper[4805]: I1216 12:23:06.110505 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z2rjp"] Dec 16 12:23:06 crc kubenswrapper[4805]: I1216 12:23:06.138982 4805 scope.go:117] "RemoveContainer" containerID="07e15df48e35dc3d2fba39eeb2e899fd5e3be46fc0dd1c8209e0652a34bbb670" Dec 16 12:23:06 crc kubenswrapper[4805]: I1216 12:23:06.534825 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fb1d6f2-253b-4846-994c-362e7d90c1e8" path="/var/lib/kubelet/pods/1fb1d6f2-253b-4846-994c-362e7d90c1e8/volumes" Dec 16 12:23:09 crc kubenswrapper[4805]: I1216 12:23:09.071004 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:23:09 crc kubenswrapper[4805]: I1216 12:23:09.074271 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:23:09 crc kubenswrapper[4805]: I1216 12:23:09.135551 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:23:10 crc kubenswrapper[4805]: I1216 12:23:10.123968 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:23:11 crc kubenswrapper[4805]: I1216 12:23:11.311510 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nr82b"] Dec 16 12:23:12 crc kubenswrapper[4805]: I1216 12:23:12.522894 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:23:12 crc kubenswrapper[4805]: E1216 12:23:12.523322 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:23:13 crc kubenswrapper[4805]: I1216 12:23:13.075895 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nr82b" podUID="32c2580b-5472-430d-a723-926712998a6e" containerName="registry-server" 
containerID="cri-o://08e4d194978ab195fa1eed9a8f55306b5f5f23aa190d4d8dd3c14e608b879ffa" gracePeriod=2 Dec 16 12:23:13 crc kubenswrapper[4805]: I1216 12:23:13.554689 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:23:13 crc kubenswrapper[4805]: I1216 12:23:13.697060 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4dmz\" (UniqueName: \"kubernetes.io/projected/32c2580b-5472-430d-a723-926712998a6e-kube-api-access-k4dmz\") pod \"32c2580b-5472-430d-a723-926712998a6e\" (UID: \"32c2580b-5472-430d-a723-926712998a6e\") " Dec 16 12:23:13 crc kubenswrapper[4805]: I1216 12:23:13.697124 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32c2580b-5472-430d-a723-926712998a6e-catalog-content\") pod \"32c2580b-5472-430d-a723-926712998a6e\" (UID: \"32c2580b-5472-430d-a723-926712998a6e\") " Dec 16 12:23:13 crc kubenswrapper[4805]: I1216 12:23:13.697171 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32c2580b-5472-430d-a723-926712998a6e-utilities\") pod \"32c2580b-5472-430d-a723-926712998a6e\" (UID: \"32c2580b-5472-430d-a723-926712998a6e\") " Dec 16 12:23:13 crc kubenswrapper[4805]: I1216 12:23:13.698489 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32c2580b-5472-430d-a723-926712998a6e-utilities" (OuterVolumeSpecName: "utilities") pod "32c2580b-5472-430d-a723-926712998a6e" (UID: "32c2580b-5472-430d-a723-926712998a6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:23:13 crc kubenswrapper[4805]: I1216 12:23:13.702976 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32c2580b-5472-430d-a723-926712998a6e-kube-api-access-k4dmz" (OuterVolumeSpecName: "kube-api-access-k4dmz") pod "32c2580b-5472-430d-a723-926712998a6e" (UID: "32c2580b-5472-430d-a723-926712998a6e"). InnerVolumeSpecName "kube-api-access-k4dmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:23:13 crc kubenswrapper[4805]: I1216 12:23:13.744461 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32c2580b-5472-430d-a723-926712998a6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32c2580b-5472-430d-a723-926712998a6e" (UID: "32c2580b-5472-430d-a723-926712998a6e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:23:13 crc kubenswrapper[4805]: I1216 12:23:13.799461 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4dmz\" (UniqueName: \"kubernetes.io/projected/32c2580b-5472-430d-a723-926712998a6e-kube-api-access-k4dmz\") on node \"crc\" DevicePath \"\"" Dec 16 12:23:13 crc kubenswrapper[4805]: I1216 12:23:13.799499 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32c2580b-5472-430d-a723-926712998a6e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:23:13 crc kubenswrapper[4805]: I1216 12:23:13.799509 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32c2580b-5472-430d-a723-926712998a6e-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.087411 4805 generic.go:334] "Generic (PLEG): container finished" podID="32c2580b-5472-430d-a723-926712998a6e" containerID="08e4d194978ab195fa1eed9a8f55306b5f5f23aa190d4d8dd3c14e608b879ffa" exitCode=0 Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.087460 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nr82b" event={"ID":"32c2580b-5472-430d-a723-926712998a6e","Type":"ContainerDied","Data":"08e4d194978ab195fa1eed9a8f55306b5f5f23aa190d4d8dd3c14e608b879ffa"} Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.087496 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nr82b" event={"ID":"32c2580b-5472-430d-a723-926712998a6e","Type":"ContainerDied","Data":"f63a2532855717355391ccb62dacfe7d59967b126942f9c8d6f37c28891da7a9"} Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.087516 4805 scope.go:117] "RemoveContainer" containerID="08e4d194978ab195fa1eed9a8f55306b5f5f23aa190d4d8dd3c14e608b879ffa" Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.087521 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nr82b" Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.117449 4805 scope.go:117] "RemoveContainer" containerID="24e09863d17d359467a0547bf3ddaeb805de2547f0367973dd09014d554004ae" Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.144811 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nr82b"] Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.161098 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nr82b"] Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.161920 4805 scope.go:117] "RemoveContainer" containerID="6d2dfd08c728c3f3fc00d66e6eaf838db75696bc0c2e39b94e5a344ced7feae5" Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.188410 4805 scope.go:117] "RemoveContainer" containerID="08e4d194978ab195fa1eed9a8f55306b5f5f23aa190d4d8dd3c14e608b879ffa" Dec 16 12:23:14 crc kubenswrapper[4805]: E1216 12:23:14.188922 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08e4d194978ab195fa1eed9a8f55306b5f5f23aa190d4d8dd3c14e608b879ffa\": container with ID starting with 08e4d194978ab195fa1eed9a8f55306b5f5f23aa190d4d8dd3c14e608b879ffa not found: ID does not exist" containerID="08e4d194978ab195fa1eed9a8f55306b5f5f23aa190d4d8dd3c14e608b879ffa" Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.188953 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08e4d194978ab195fa1eed9a8f55306b5f5f23aa190d4d8dd3c14e608b879ffa"} err="failed to get container status \"08e4d194978ab195fa1eed9a8f55306b5f5f23aa190d4d8dd3c14e608b879ffa\": rpc error: code = NotFound desc = could not find container \"08e4d194978ab195fa1eed9a8f55306b5f5f23aa190d4d8dd3c14e608b879ffa\": container with ID starting with 08e4d194978ab195fa1eed9a8f55306b5f5f23aa190d4d8dd3c14e608b879ffa not found: ID does not exist" Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.188991 4805 scope.go:117] "RemoveContainer" containerID="24e09863d17d359467a0547bf3ddaeb805de2547f0367973dd09014d554004ae" Dec 16 12:23:14 crc kubenswrapper[4805]: E1216 12:23:14.189358 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24e09863d17d359467a0547bf3ddaeb805de2547f0367973dd09014d554004ae\": container with ID starting with 24e09863d17d359467a0547bf3ddaeb805de2547f0367973dd09014d554004ae not found: ID does not exist" containerID="24e09863d17d359467a0547bf3ddaeb805de2547f0367973dd09014d554004ae" Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.189386 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e09863d17d359467a0547bf3ddaeb805de2547f0367973dd09014d554004ae"} err="failed to get container status \"24e09863d17d359467a0547bf3ddaeb805de2547f0367973dd09014d554004ae\": rpc error: code = NotFound desc = could not find container \"24e09863d17d359467a0547bf3ddaeb805de2547f0367973dd09014d554004ae\": container with ID starting with 24e09863d17d359467a0547bf3ddaeb805de2547f0367973dd09014d554004ae not found: ID does not exist" Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.189405 4805 scope.go:117] "RemoveContainer" containerID="6d2dfd08c728c3f3fc00d66e6eaf838db75696bc0c2e39b94e5a344ced7feae5" Dec 16 12:23:14 crc kubenswrapper[4805]: E1216 12:23:14.189734 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6d2dfd08c728c3f3fc00d66e6eaf838db75696bc0c2e39b94e5a344ced7feae5\": container with ID starting with 6d2dfd08c728c3f3fc00d66e6eaf838db75696bc0c2e39b94e5a344ced7feae5 not found: ID does not exist" containerID="6d2dfd08c728c3f3fc00d66e6eaf838db75696bc0c2e39b94e5a344ced7feae5" Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.189775 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d2dfd08c728c3f3fc00d66e6eaf838db75696bc0c2e39b94e5a344ced7feae5"} err="failed to get container status \"6d2dfd08c728c3f3fc00d66e6eaf838db75696bc0c2e39b94e5a344ced7feae5\": rpc error: code = NotFound desc = could not find container \"6d2dfd08c728c3f3fc00d66e6eaf838db75696bc0c2e39b94e5a344ced7feae5\": container with ID starting with 6d2dfd08c728c3f3fc00d66e6eaf838db75696bc0c2e39b94e5a344ced7feae5 not found: ID does not exist" Dec 16 12:23:14 crc kubenswrapper[4805]: I1216 12:23:14.563822 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32c2580b-5472-430d-a723-926712998a6e" path="/var/lib/kubelet/pods/32c2580b-5472-430d-a723-926712998a6e/volumes" Dec 16 12:23:24 crc kubenswrapper[4805]: I1216 12:23:24.522685 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:23:24 crc kubenswrapper[4805]: E1216 12:23:24.523543 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:23:37 crc kubenswrapper[4805]: I1216 12:23:37.522943 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:23:37 crc kubenswrapper[4805]: E1216 12:23:37.523614 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:23:51 crc kubenswrapper[4805]: I1216 12:23:51.522868 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:23:51 crc kubenswrapper[4805]: E1216 12:23:51.524589 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:24:03 crc kubenswrapper[4805]: I1216 12:24:03.523855 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:24:03 crc kubenswrapper[4805]: E1216 12:24:03.524680 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:24:14 crc kubenswrapper[4805]: I1216 12:24:14.061430 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-b6p7d"] Dec 16 12:24:14 crc kubenswrapper[4805]: I1216 12:24:14.077137 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-b6p7d"] Dec 16 12:24:14 crc kubenswrapper[4805]: I1216 12:24:14.541573 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70a53daa-fa15-46b2-b365-bca57d860620" path="/var/lib/kubelet/pods/70a53daa-fa15-46b2-b365-bca57d860620/volumes" Dec 16 12:24:15 crc kubenswrapper[4805]: I1216 12:24:15.050649 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-6fqc4"] Dec 16 12:24:15 crc kubenswrapper[4805]: I1216 12:24:15.064934 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-6fqc4"] Dec 16 12:24:16 crc kubenswrapper[4805]: I1216 12:24:16.529958 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:24:16 crc kubenswrapper[4805]: E1216 12:24:16.530695 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:24:16 crc kubenswrapper[4805]: I1216 12:24:16.539442 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b651a61-b4a2-45c6-a349-fde447509d3c" path="/var/lib/kubelet/pods/3b651a61-b4a2-45c6-a349-fde447509d3c/volumes" Dec 16 12:24:21 crc kubenswrapper[4805]: I1216 12:24:21.030082 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-pk8zf"] Dec 16 12:24:21 crc kubenswrapper[4805]: I1216 12:24:21.040498 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-pk8zf"] Dec 16 12:24:22 crc kubenswrapper[4805]: I1216 12:24:22.540079 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22ff15b5-9ba4-47c0-87a0-7d4400ce9e17" path="/var/lib/kubelet/pods/22ff15b5-9ba4-47c0-87a0-7d4400ce9e17/volumes" Dec 16 12:24:25 crc kubenswrapper[4805]: I1216 12:24:25.035348 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-546b-account-create-mx6tl"] Dec 16 12:24:25 crc kubenswrapper[4805]: I1216 12:24:25.048057 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-546b-account-create-mx6tl"] Dec 16 12:24:26 crc kubenswrapper[4805]: I1216 12:24:26.038393 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1415-account-create-svvgs"] Dec 16 12:24:26 crc kubenswrapper[4805]: I1216 12:24:26.051406 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1415-account-create-svvgs"] Dec 16 12:24:26 crc kubenswrapper[4805]: I1216 12:24:26.537536 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="327fe989-8d12-4624-adb2-d2fe4680168c" path="/var/lib/kubelet/pods/327fe989-8d12-4624-adb2-d2fe4680168c/volumes" Dec 16 12:24:26 crc kubenswrapper[4805]: I1216 12:24:26.538476 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42d0d1a8-0eb3-4d1d-899a-e1441332e34d" path="/var/lib/kubelet/pods/42d0d1a8-0eb3-4d1d-899a-e1441332e34d/volumes" Dec 16 12:24:29 crc kubenswrapper[4805]: I1216 12:24:29.523946 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:24:29 crc kubenswrapper[4805]: E1216 12:24:29.524585 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:24:31 crc kubenswrapper[4805]: I1216 12:24:31.039193 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1042-account-create-mxznj"] Dec 16 12:24:31 crc kubenswrapper[4805]: I1216 12:24:31.048319 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1042-account-create-mxznj"] Dec 16 12:24:32 crc kubenswrapper[4805]: I1216 12:24:32.540336 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49dfee39-2eca-4686-ab17-fb6e4ccaf226" path="/var/lib/kubelet/pods/49dfee39-2eca-4686-ab17-fb6e4ccaf226/volumes" Dec 16 12:24:44 crc kubenswrapper[4805]: I1216 12:24:44.522964 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:24:44 crc kubenswrapper[4805]: E1216 12:24:44.523751 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:24:50 crc kubenswrapper[4805]: I1216 12:24:50.038203 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-shl6n"] Dec 16 12:24:50 crc kubenswrapper[4805]: I1216 12:24:50.049641 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-shl6n"] Dec 16 12:24:50 crc kubenswrapper[4805]: I1216 12:24:50.061636 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-k5str"] Dec 16 12:24:50 crc kubenswrapper[4805]: I1216 12:24:50.069402 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-k5str"] Dec 16 12:24:50 crc kubenswrapper[4805]: I1216 12:24:50.077086 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-tdldw"] Dec 16 12:24:50 crc kubenswrapper[4805]: I1216 12:24:50.084357 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-tdldw"] Dec 16 12:24:50 crc kubenswrapper[4805]: I1216 12:24:50.533574 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2013484d-d204-43f8-a923-7ec541851696" path="/var/lib/kubelet/pods/2013484d-d204-43f8-a923-7ec541851696/volumes" Dec 16 12:24:50 crc kubenswrapper[4805]: I1216 
12:24:50.534498 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fd974ee-1177-4f7e-8c86-63aec6f9c86e" path="/var/lib/kubelet/pods/3fd974ee-1177-4f7e-8c86-63aec6f9c86e/volumes" Dec 16 12:24:50 crc kubenswrapper[4805]: I1216 12:24:50.535026 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7de0415-ad51-47a8-965e-fe16b2d9af9c" path="/var/lib/kubelet/pods/a7de0415-ad51-47a8-965e-fe16b2d9af9c/volumes" Dec 16 12:24:56 crc kubenswrapper[4805]: I1216 12:24:56.241186 4805 scope.go:117] "RemoveContainer" containerID="68782eb1bd58b57a8b3a20e2f973f63e4b28e322b87bb3e215451f59b631e0c2" Dec 16 12:24:56 crc kubenswrapper[4805]: I1216 12:24:56.741210 4805 scope.go:117] "RemoveContainer" containerID="736c3119475f389e96206fca8225715dc5a2bc2dd0fa72d966188bd243fc3bf9" Dec 16 12:24:56 crc kubenswrapper[4805]: I1216 12:24:56.773834 4805 scope.go:117] "RemoveContainer" containerID="cd4ba4b3ef99a38fca5cba1c2ed06a07f724adea8b4f38fddc801cf66d9dd4d5" Dec 16 12:24:56 crc kubenswrapper[4805]: I1216 12:24:56.821799 4805 scope.go:117] "RemoveContainer" containerID="8abee5f9389d4d68a11f937138fc5d7fea8b649f5ff7e4b00d4b1e4f4e9eb7e1" Dec 16 12:24:56 crc kubenswrapper[4805]: I1216 12:24:56.866464 4805 scope.go:117] "RemoveContainer" containerID="199b6e47cf4a7f85b91da8362f267238e4568e4147890235d02eeebb174b4418" Dec 16 12:24:57 crc kubenswrapper[4805]: I1216 12:24:57.017239 4805 scope.go:117] "RemoveContainer" containerID="dac27f968d0db7cb9302edf8b58ccad2a8f13e5c3ac2c34c3aa0082667093ed9" Dec 16 12:24:57 crc kubenswrapper[4805]: I1216 12:24:57.044859 4805 scope.go:117] "RemoveContainer" containerID="9dff9f6b161586603998e6cdb1c926d338bcdf68fd6912ae7406c084b60a6f42" Dec 16 12:24:57 crc kubenswrapper[4805]: I1216 12:24:57.064943 4805 scope.go:117] "RemoveContainer" containerID="29ccad7a5e65257ba490efa4bf6f5768e68751a2c1dedbe93844d47589aba742" Dec 16 12:24:57 crc kubenswrapper[4805]: I1216 12:24:57.083334 4805 scope.go:117] "RemoveContainer" containerID="15a1ebed4365cad53c6a401f1c678d67e23548745afed1a5a25d6807546e3cf4" Dec 16 12:24:58 crc kubenswrapper[4805]: I1216 12:24:58.523542 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:24:58 crc kubenswrapper[4805]: E1216 12:24:58.524253 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:25:03 crc kubenswrapper[4805]: I1216 12:25:03.157831 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-514d-account-create-2rk67"] Dec 16 12:25:03 crc kubenswrapper[4805]: I1216 12:25:03.179396 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c9e7-account-create-5fpb9"] Dec 16 12:25:03 crc kubenswrapper[4805]: I1216 12:25:03.199392 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3d76-account-create-npbtw"] Dec 16 12:25:03 crc kubenswrapper[4805]: I1216 12:25:03.209153 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c9e7-account-create-5fpb9"] Dec 16 12:25:03 crc kubenswrapper[4805]: I1216 12:25:03.221375 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-514d-account-create-2rk67"] Dec 16 12:25:03 crc kubenswrapper[4805]: I1216 12:25:03.229835 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3d76-account-create-npbtw"] Dec 16 12:25:04 crc kubenswrapper[4805]: I1216 12:25:04.534901 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47d14a2a-df4a-45be-b675-9a69ed1e8d45" path="/var/lib/kubelet/pods/47d14a2a-df4a-45be-b675-9a69ed1e8d45/volumes" Dec 16 12:25:04 crc kubenswrapper[4805]: I1216 12:25:04.535906 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66f57d62-e317-4e4b-ac87-b56d6ece0564" path="/var/lib/kubelet/pods/66f57d62-e317-4e4b-ac87-b56d6ece0564/volumes" Dec 16 12:25:04 crc kubenswrapper[4805]: I1216 12:25:04.536680 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3a83717-7d8b-4175-82db-106b634368b0" path="/var/lib/kubelet/pods/a3a83717-7d8b-4175-82db-106b634368b0/volumes" Dec 16 12:25:07 crc kubenswrapper[4805]: I1216 12:25:07.039161 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-qckpn"] Dec 16 12:25:07 crc kubenswrapper[4805]: I1216 12:25:07.054646 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-qckpn"] Dec 16 12:25:08 crc kubenswrapper[4805]: I1216 12:25:08.539137 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b60b01ba-b205-4451-8a28-33dfb9845ff2" path="/var/lib/kubelet/pods/b60b01ba-b205-4451-8a28-33dfb9845ff2/volumes" Dec 16 12:25:12 crc kubenswrapper[4805]: I1216 12:25:12.522653 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:25:12 crc kubenswrapper[4805]: E1216 12:25:12.523915 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:25:16 crc kubenswrapper[4805]: I1216 12:25:16.035317 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-x5dc8"] Dec 16 12:25:16 crc kubenswrapper[4805]: I1216 12:25:16.045022 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-x5dc8"] Dec 16 12:25:16 crc kubenswrapper[4805]: I1216 12:25:16.534112 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5fec641-bdba-4f8a-b6bf-d13721a860d2" path="/var/lib/kubelet/pods/c5fec641-bdba-4f8a-b6bf-d13721a860d2/volumes" Dec 16 12:25:26 crc kubenswrapper[4805]: I1216 12:25:26.528520 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:25:26 crc kubenswrapper[4805]: E1216 12:25:26.529243 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:25:40 crc kubenswrapper[4805]: I1216 12:25:40.525290 4805 scope.go:117] "RemoveContainer" 
containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:25:40 crc kubenswrapper[4805]: E1216 12:25:40.526070 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:25:55 crc kubenswrapper[4805]: I1216 12:25:55.522861 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:25:55 crc kubenswrapper[4805]: E1216 12:25:55.523694 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:25:57 crc kubenswrapper[4805]: I1216 12:25:57.302975 4805 scope.go:117] "RemoveContainer" containerID="56c3d96d5c34655ac8e4ef1fafa79029a2d8de902b0fbe32a7e7893b6112742b" Dec 16 12:25:57 crc kubenswrapper[4805]: I1216 12:25:57.325808 4805 scope.go:117] "RemoveContainer" containerID="140a39ad1a22edb05009c51b3050ff9138f2740544805dda33f32fa10cc5a526" Dec 16 12:25:57 crc kubenswrapper[4805]: I1216 12:25:57.376797 4805 scope.go:117] "RemoveContainer" containerID="bd53f6966de5580751d1978ea6bec3d5767d6fab88d894e29519f2f913a794f8" Dec 16 12:25:57 crc kubenswrapper[4805]: I1216 12:25:57.419606 4805 scope.go:117] "RemoveContainer" containerID="049de569c4c845d1fb82d31f4b08259fa6f699761843bf03f197d811f621389c" Dec 16 12:25:57 crc kubenswrapper[4805]: I1216 12:25:57.469647 4805 scope.go:117] "RemoveContainer" containerID="8236a7a863a73ca414f133a79af5c6335a5ab2bfefb38c3305d212f6c1009298" Dec 16 12:26:04 crc kubenswrapper[4805]: I1216 12:26:04.078347 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kfrrc"] Dec 16 12:26:04 crc kubenswrapper[4805]: I1216 12:26:04.094854 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kfrrc"] Dec 16 12:26:04 crc kubenswrapper[4805]: I1216 12:26:04.533912 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51afacd6-c090-45db-aa5c-da8b53734401" path="/var/lib/kubelet/pods/51afacd6-c090-45db-aa5c-da8b53734401/volumes" Dec 16 12:26:07 crc kubenswrapper[4805]: I1216 12:26:07.523274 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:26:07 crc kubenswrapper[4805]: E1216 12:26:07.524070 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:26:20 crc kubenswrapper[4805]: I1216 12:26:20.524977 4805 scope.go:117] "RemoveContainer" 
containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:26:20 crc kubenswrapper[4805]: E1216 12:26:20.525905 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:26:21 crc kubenswrapper[4805]: I1216 12:26:21.234587 4805 generic.go:334] "Generic (PLEG): container finished" podID="d0396b0a-2aae-4507-a31e-cbd5f936f3eb" containerID="f6642e0bd0fc15e40116d385fea18c63462b165ea7894a1c168a1ff41f4ec817" exitCode=0 Dec 16 12:26:21 crc kubenswrapper[4805]: I1216 12:26:21.234910 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" event={"ID":"d0396b0a-2aae-4507-a31e-cbd5f936f3eb","Type":"ContainerDied","Data":"f6642e0bd0fc15e40116d385fea18c63462b165ea7894a1c168a1ff41f4ec817"} Dec 16 12:26:22 crc kubenswrapper[4805]: I1216 12:26:22.698245 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:26:22 crc kubenswrapper[4805]: I1216 12:26:22.760166 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lts8s\" (UniqueName: \"kubernetes.io/projected/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-kube-api-access-lts8s\") pod \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " Dec 16 12:26:22 crc kubenswrapper[4805]: I1216 12:26:22.760234 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-inventory\") pod \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " Dec 16 12:26:22 crc kubenswrapper[4805]: I1216 12:26:22.760267 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-bootstrap-combined-ca-bundle\") pod \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " Dec 16 12:26:22 crc kubenswrapper[4805]: I1216 12:26:22.760301 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-ssh-key\") pod \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\" (UID: \"d0396b0a-2aae-4507-a31e-cbd5f936f3eb\") " Dec 16 12:26:22 crc kubenswrapper[4805]: I1216 12:26:22.766390 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-kube-api-access-lts8s" (OuterVolumeSpecName: "kube-api-access-lts8s") pod "d0396b0a-2aae-4507-a31e-cbd5f936f3eb" (UID: "d0396b0a-2aae-4507-a31e-cbd5f936f3eb"). InnerVolumeSpecName "kube-api-access-lts8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:26:22 crc kubenswrapper[4805]: I1216 12:26:22.766449 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d0396b0a-2aae-4507-a31e-cbd5f936f3eb" (UID: "d0396b0a-2aae-4507-a31e-cbd5f936f3eb"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:26:22 crc kubenswrapper[4805]: I1216 12:26:22.790122 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d0396b0a-2aae-4507-a31e-cbd5f936f3eb" (UID: "d0396b0a-2aae-4507-a31e-cbd5f936f3eb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:26:22 crc kubenswrapper[4805]: I1216 12:26:22.792518 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-inventory" (OuterVolumeSpecName: "inventory") pod "d0396b0a-2aae-4507-a31e-cbd5f936f3eb" (UID: "d0396b0a-2aae-4507-a31e-cbd5f936f3eb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:26:22 crc kubenswrapper[4805]: I1216 12:26:22.863189 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-inventory\") on node \"crc\" DevicePath \"\"" Dec 16 12:26:22 crc kubenswrapper[4805]: I1216 12:26:22.863229 4805 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:26:22 crc kubenswrapper[4805]: I1216 12:26:22.863238 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:26:22 crc kubenswrapper[4805]: I1216 12:26:22.863247 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lts8s\" (UniqueName: \"kubernetes.io/projected/d0396b0a-2aae-4507-a31e-cbd5f936f3eb-kube-api-access-lts8s\") on node \"crc\" DevicePath \"\"" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.252929 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" event={"ID":"d0396b0a-2aae-4507-a31e-cbd5f936f3eb","Type":"ContainerDied","Data":"a266f2300b2af9804435d127d89cf5fc9efd4dec127dd15e84199b94f2f4e2ab"} Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.252987 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a266f2300b2af9804435d127d89cf5fc9efd4dec127dd15e84199b94f2f4e2ab" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.252993 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.352317 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9"] Dec 16 12:26:23 crc kubenswrapper[4805]: E1216 12:26:23.353334 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb1d6f2-253b-4846-994c-362e7d90c1e8" containerName="extract-content" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.353503 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb1d6f2-253b-4846-994c-362e7d90c1e8" containerName="extract-content" Dec 16 12:26:23 crc kubenswrapper[4805]: E1216 12:26:23.353681 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb1d6f2-253b-4846-994c-362e7d90c1e8" containerName="extract-utilities" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.353811 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb1d6f2-253b-4846-994c-362e7d90c1e8" containerName="extract-utilities" Dec 16 12:26:23 crc kubenswrapper[4805]: E1216 12:26:23.353994 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0396b0a-2aae-4507-a31e-cbd5f936f3eb" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.354123 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0396b0a-2aae-4507-a31e-cbd5f936f3eb" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 16 12:26:23 crc kubenswrapper[4805]: E1216 12:26:23.354286 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32c2580b-5472-430d-a723-926712998a6e" containerName="registry-server" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.354427 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="32c2580b-5472-430d-a723-926712998a6e" containerName="registry-server" Dec 16 12:26:23 crc kubenswrapper[4805]: E1216 12:26:23.354581 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32c2580b-5472-430d-a723-926712998a6e" containerName="extract-content" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.354704 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="32c2580b-5472-430d-a723-926712998a6e" containerName="extract-content" Dec 16 12:26:23 crc kubenswrapper[4805]: E1216 12:26:23.354857 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb1d6f2-253b-4846-994c-362e7d90c1e8" containerName="registry-server" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.354978 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb1d6f2-253b-4846-994c-362e7d90c1e8" containerName="registry-server" Dec 16 12:26:23 crc kubenswrapper[4805]: E1216 12:26:23.355109 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32c2580b-5472-430d-a723-926712998a6e" containerName="extract-utilities" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.355268 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="32c2580b-5472-430d-a723-926712998a6e" containerName="extract-utilities" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.355745 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0396b0a-2aae-4507-a31e-cbd5f936f3eb" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.355926 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="32c2580b-5472-430d-a723-926712998a6e" 
containerName="registry-server" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.356067 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb1d6f2-253b-4846-994c-362e7d90c1e8" containerName="registry-server" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.357362 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.359736 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.361128 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.361439 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.362757 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.364220 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9"] Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.378472 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8gwf\" (UniqueName: \"kubernetes.io/projected/d062ee72-cb69-4cdc-93fe-1474435c0904-kube-api-access-n8gwf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9\" (UID: \"d062ee72-cb69-4cdc-93fe-1474435c0904\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.378576 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d062ee72-cb69-4cdc-93fe-1474435c0904-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9\" (UID: \"d062ee72-cb69-4cdc-93fe-1474435c0904\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.378722 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d062ee72-cb69-4cdc-93fe-1474435c0904-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9\" (UID: \"d062ee72-cb69-4cdc-93fe-1474435c0904\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.481258 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8gwf\" (UniqueName: \"kubernetes.io/projected/d062ee72-cb69-4cdc-93fe-1474435c0904-kube-api-access-n8gwf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9\" (UID: \"d062ee72-cb69-4cdc-93fe-1474435c0904\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.481345 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d062ee72-cb69-4cdc-93fe-1474435c0904-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9\" (UID: \"d062ee72-cb69-4cdc-93fe-1474435c0904\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.481420 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d062ee72-cb69-4cdc-93fe-1474435c0904-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9\" (UID: \"d062ee72-cb69-4cdc-93fe-1474435c0904\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.486694 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d062ee72-cb69-4cdc-93fe-1474435c0904-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9\" (UID: \"d062ee72-cb69-4cdc-93fe-1474435c0904\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.489904 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d062ee72-cb69-4cdc-93fe-1474435c0904-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9\" (UID: \"d062ee72-cb69-4cdc-93fe-1474435c0904\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.502064 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8gwf\" (UniqueName: \"kubernetes.io/projected/d062ee72-cb69-4cdc-93fe-1474435c0904-kube-api-access-n8gwf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9\" (UID: \"d062ee72-cb69-4cdc-93fe-1474435c0904\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" Dec 16 12:26:23 crc kubenswrapper[4805]: I1216 12:26:23.708862 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" Dec 16 12:26:24 crc kubenswrapper[4805]: I1216 12:26:24.247437 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9"] Dec 16 12:26:24 crc kubenswrapper[4805]: I1216 12:26:24.289989 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" event={"ID":"d062ee72-cb69-4cdc-93fe-1474435c0904","Type":"ContainerStarted","Data":"1e06e5143d72ba52ab31eb91dac21ac9062d06bfb1ccc4539f99bdd963c4543b"} Dec 16 12:26:25 crc kubenswrapper[4805]: I1216 12:26:25.300240 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" event={"ID":"d062ee72-cb69-4cdc-93fe-1474435c0904","Type":"ContainerStarted","Data":"b11107e9dc645a8b968a1816ffb6742659eb7d8d8eee83a175d3f415165b70b6"} Dec 16 12:26:25 crc kubenswrapper[4805]: I1216 12:26:25.326918 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" podStartSLOduration=1.796418612 podStartE2EDuration="2.326899399s" podCreationTimestamp="2025-12-16 12:26:23 +0000 UTC" firstStartedPulling="2025-12-16 12:26:24.252196111 +0000 UTC m=+1857.970453916" lastFinishedPulling="2025-12-16 12:26:24.782676898 +0000 UTC m=+1858.500934703" observedRunningTime="2025-12-16 12:26:25.314404011 +0000 UTC m=+1859.032661816" watchObservedRunningTime="2025-12-16 12:26:25.326899399 +0000 UTC m=+1859.045157214" Dec 16 12:26:35 crc kubenswrapper[4805]: I1216 12:26:35.523246 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:26:35 crc kubenswrapper[4805]: E1216 12:26:35.524116 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:26:37 crc kubenswrapper[4805]: I1216 12:26:37.053085 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-l88fx"] Dec 16 12:26:37 crc kubenswrapper[4805]: I1216 12:26:37.063997 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2th2m"] Dec 16 12:26:37 crc kubenswrapper[4805]: I1216 12:26:37.073911 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-l88fx"] Dec 16 12:26:37 crc kubenswrapper[4805]: I1216 12:26:37.082099 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2th2m"] Dec 16 12:26:38 crc kubenswrapper[4805]: I1216 12:26:38.538438 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bedb416e-1423-4c43-8676-f6843c51c7b0" path="/var/lib/kubelet/pods/bedb416e-1423-4c43-8676-f6843c51c7b0/volumes" Dec 16 12:26:38 crc kubenswrapper[4805]: I1216 12:26:38.539864 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed0d4cec-a7ad-4887-8dc8-e7da47a7c878" path="/var/lib/kubelet/pods/ed0d4cec-a7ad-4887-8dc8-e7da47a7c878/volumes" Dec 16 12:26:50 crc kubenswrapper[4805]: I1216 12:26:50.523022 4805 scope.go:117] "RemoveContainer" 
containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:26:50 crc kubenswrapper[4805]: E1216 12:26:50.523880 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:26:57 crc kubenswrapper[4805]: I1216 12:26:57.655326 4805 scope.go:117] "RemoveContainer" containerID="c098b155aef353dd865ed392e4d7db83c6b4cf9da55f74658a418b5c260f9c3c" Dec 16 12:26:57 crc kubenswrapper[4805]: I1216 12:26:57.685698 4805 scope.go:117] "RemoveContainer" containerID="62eb1b9103f333ef9de12ee3767db5b2504d85daf1b363217f7912283f710a33" Dec 16 12:26:57 crc kubenswrapper[4805]: I1216 12:26:57.732423 4805 scope.go:117] "RemoveContainer" containerID="350c78c6c63507f88127a2e1d845e0aac634a99db3a3be94e1afe28884c84c59" Dec 16 12:27:01 crc kubenswrapper[4805]: I1216 12:27:01.522784 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308" Dec 16 12:27:02 crc kubenswrapper[4805]: I1216 12:27:02.658472 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"956ad6f3f2307c0a334de402c87f8d9140bbfe291ff841eaa18f652f0b7f65f8"} Dec 16 12:27:08 crc kubenswrapper[4805]: I1216 12:27:08.040096 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-sljmv"] Dec 16 12:27:08 crc kubenswrapper[4805]: I1216 12:27:08.049645 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-sljmv"] Dec 16 12:27:08 crc kubenswrapper[4805]: I1216 12:27:08.535481 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bbda936-b96e-491c-9d8c-1e4595a42566" path="/var/lib/kubelet/pods/8bbda936-b96e-491c-9d8c-1e4595a42566/volumes" Dec 16 12:27:09 crc kubenswrapper[4805]: I1216 12:27:09.038117 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-9v2nl"] Dec 16 12:27:09 crc kubenswrapper[4805]: I1216 12:27:09.053398 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-9v2nl"] Dec 16 12:27:10 crc kubenswrapper[4805]: I1216 12:27:10.555566 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa98eb47-6335-4544-b5d6-718b55075000" path="/var/lib/kubelet/pods/aa98eb47-6335-4544-b5d6-718b55075000/volumes" Dec 16 12:27:32 crc kubenswrapper[4805]: I1216 12:27:32.043555 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-9hjq9"] Dec 16 12:27:32 crc kubenswrapper[4805]: I1216 12:27:32.053794 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-fb4np"] Dec 16 12:27:32 crc kubenswrapper[4805]: I1216 12:27:32.064003 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-4kzsj"] Dec 16 12:27:32 crc kubenswrapper[4805]: I1216 12:27:32.072697 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-fb4np"] Dec 16 12:27:32 crc kubenswrapper[4805]: I1216 12:27:32.083043 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-db-create-9hjq9"] Dec 16 12:27:32 crc kubenswrapper[4805]: I1216 12:27:32.092378 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-4kzsj"] Dec 16 12:27:32 crc kubenswrapper[4805]: I1216 12:27:32.538306 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9202f107-b728-488a-a01a-c4412be6e808" path="/var/lib/kubelet/pods/9202f107-b728-488a-a01a-c4412be6e808/volumes" Dec 16 12:27:32 crc kubenswrapper[4805]: I1216 12:27:32.539317 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceb88d05-d1a2-4da2-a4ec-64649b5069e9" path="/var/lib/kubelet/pods/ceb88d05-d1a2-4da2-a4ec-64649b5069e9/volumes" Dec 16 12:27:32 crc kubenswrapper[4805]: I1216 12:27:32.542522 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9477902-43a4-4af0-9d5b-0a063514b7e9" path="/var/lib/kubelet/pods/d9477902-43a4-4af0-9d5b-0a063514b7e9/volumes" Dec 16 12:27:41 crc kubenswrapper[4805]: I1216 12:27:41.027787 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3854-account-create-lmk7n"] Dec 16 12:27:41 crc kubenswrapper[4805]: I1216 12:27:41.037677 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3854-account-create-lmk7n"] Dec 16 12:27:42 crc kubenswrapper[4805]: I1216 12:27:42.028989 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-493e-account-create-ldm5n"] Dec 16 12:27:42 crc kubenswrapper[4805]: I1216 12:27:42.035333 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-493e-account-create-ldm5n"] Dec 16 12:27:42 crc kubenswrapper[4805]: I1216 12:27:42.540971 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46d3adf2-5db0-47b1-8488-39d3c7a0c8db" path="/var/lib/kubelet/pods/46d3adf2-5db0-47b1-8488-39d3c7a0c8db/volumes" Dec 16 12:27:42 crc kubenswrapper[4805]: I1216 12:27:42.541733 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5868d05e-60ea-4a7a-b020-1f590921bd31" path="/var/lib/kubelet/pods/5868d05e-60ea-4a7a-b020-1f590921bd31/volumes" Dec 16 12:27:51 crc kubenswrapper[4805]: I1216 12:27:51.033617 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-b002-account-create-v9rtw"] Dec 16 12:27:51 crc kubenswrapper[4805]: I1216 12:27:51.042897 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-b002-account-create-v9rtw"] Dec 16 12:27:52 crc kubenswrapper[4805]: I1216 12:27:52.532682 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd" path="/var/lib/kubelet/pods/cd4d60a4-7c36-4922-bd16-ab9d3ce6b0bd/volumes" Dec 16 12:27:57 crc kubenswrapper[4805]: I1216 12:27:57.856303 4805 scope.go:117] "RemoveContainer" containerID="14860bbb92b244a7639360b6504afdff17c24be6a58d0f4367998484d9525903" Dec 16 12:27:57 crc kubenswrapper[4805]: I1216 12:27:57.881211 4805 scope.go:117] "RemoveContainer" containerID="577888b0a6ff989ca1016b9930991e75bc4527c3bbae56a147fa1e7bcda1b549" Dec 16 12:27:57 crc kubenswrapper[4805]: I1216 12:27:57.938572 4805 scope.go:117] "RemoveContainer" containerID="b05d4c0ab79b9c45ce7e8d704b5c869253d328f8b530dd6a7c96fea000b507ec" Dec 16 12:27:57 crc kubenswrapper[4805]: I1216 12:27:57.992215 4805 scope.go:117] "RemoveContainer" containerID="ccb345c51a5114a61f7c2937be509774097c505e5bbe196b3c0661a7b305d2a1" Dec 16 12:27:58 crc kubenswrapper[4805]: I1216 12:27:58.037970 4805 scope.go:117] "RemoveContainer" 
containerID="649bde08cbfc53f2717f54067820c222776a9c09a29bf9bd1e91ccfbf0ff0d29" Dec 16 12:27:58 crc kubenswrapper[4805]: I1216 12:27:58.086883 4805 scope.go:117] "RemoveContainer" containerID="ffae44ccbd45385089da2834ee56ff47a14f824b490e566daf19d2eb44a2b611" Dec 16 12:27:58 crc kubenswrapper[4805]: I1216 12:27:58.137487 4805 scope.go:117] "RemoveContainer" containerID="c8c5f0d29b119f9ab184cf85bddef944ae6359182ffbe96fd8612c9df390f09b" Dec 16 12:27:58 crc kubenswrapper[4805]: I1216 12:27:58.163642 4805 scope.go:117] "RemoveContainer" containerID="953796098ece18db3a779a60e5fcbe5a040afab6c7be6904f62098e0ba6a4b15" Dec 16 12:28:21 crc kubenswrapper[4805]: I1216 12:28:21.059118 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jwhtf"] Dec 16 12:28:21 crc kubenswrapper[4805]: I1216 12:28:21.074235 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jwhtf"] Dec 16 12:28:22 crc kubenswrapper[4805]: I1216 12:28:22.537488 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76f685ad-cc66-4b5a-bc99-f7af5604cbaa" path="/var/lib/kubelet/pods/76f685ad-cc66-4b5a-bc99-f7af5604cbaa/volumes" Dec 16 12:28:38 crc kubenswrapper[4805]: I1216 12:28:38.735457 4805 generic.go:334] "Generic (PLEG): container finished" podID="d062ee72-cb69-4cdc-93fe-1474435c0904" containerID="b11107e9dc645a8b968a1816ffb6742659eb7d8d8eee83a175d3f415165b70b6" exitCode=0 Dec 16 12:28:38 crc kubenswrapper[4805]: I1216 12:28:38.736013 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" event={"ID":"d062ee72-cb69-4cdc-93fe-1474435c0904","Type":"ContainerDied","Data":"b11107e9dc645a8b968a1816ffb6742659eb7d8d8eee83a175d3f415165b70b6"} Dec 16 12:28:39 crc kubenswrapper[4805]: I1216 12:28:39.922360 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fvvm6"] Dec 16 12:28:39 crc kubenswrapper[4805]: I1216 12:28:39.927764 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:28:39 crc kubenswrapper[4805]: I1216 12:28:39.941555 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvvm6"] Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.014425 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-utilities\") pod \"redhat-operators-fvvm6\" (UID: \"03ae6b34-8a5e-4691-9ee2-d019a0c4927e\") " pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.014467 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-catalog-content\") pod \"redhat-operators-fvvm6\" (UID: \"03ae6b34-8a5e-4691-9ee2-d019a0c4927e\") " pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.014493 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzx87\" (UniqueName: \"kubernetes.io/projected/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-kube-api-access-vzx87\") pod \"redhat-operators-fvvm6\" (UID: \"03ae6b34-8a5e-4691-9ee2-d019a0c4927e\") " pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.115896 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-utilities\") pod \"redhat-operators-fvvm6\" (UID: \"03ae6b34-8a5e-4691-9ee2-d019a0c4927e\") " pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.115949 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-catalog-content\") pod \"redhat-operators-fvvm6\" (UID: \"03ae6b34-8a5e-4691-9ee2-d019a0c4927e\") " pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.115982 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzx87\" (UniqueName: \"kubernetes.io/projected/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-kube-api-access-vzx87\") pod \"redhat-operators-fvvm6\" (UID: \"03ae6b34-8a5e-4691-9ee2-d019a0c4927e\") " pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.116749 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-utilities\") pod \"redhat-operators-fvvm6\" (UID: \"03ae6b34-8a5e-4691-9ee2-d019a0c4927e\") " pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.117483 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-catalog-content\") pod \"redhat-operators-fvvm6\" (UID: \"03ae6b34-8a5e-4691-9ee2-d019a0c4927e\") " pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.140313 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vzx87\" (UniqueName: \"kubernetes.io/projected/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-kube-api-access-vzx87\") pod \"redhat-operators-fvvm6\" (UID: \"03ae6b34-8a5e-4691-9ee2-d019a0c4927e\") " pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.204882 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.217520 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8gwf\" (UniqueName: \"kubernetes.io/projected/d062ee72-cb69-4cdc-93fe-1474435c0904-kube-api-access-n8gwf\") pod \"d062ee72-cb69-4cdc-93fe-1474435c0904\" (UID: \"d062ee72-cb69-4cdc-93fe-1474435c0904\") " Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.217586 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d062ee72-cb69-4cdc-93fe-1474435c0904-ssh-key\") pod \"d062ee72-cb69-4cdc-93fe-1474435c0904\" (UID: \"d062ee72-cb69-4cdc-93fe-1474435c0904\") " Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.217718 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d062ee72-cb69-4cdc-93fe-1474435c0904-inventory\") pod \"d062ee72-cb69-4cdc-93fe-1474435c0904\" (UID: \"d062ee72-cb69-4cdc-93fe-1474435c0904\") " Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.221843 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d062ee72-cb69-4cdc-93fe-1474435c0904-kube-api-access-n8gwf" (OuterVolumeSpecName: "kube-api-access-n8gwf") pod "d062ee72-cb69-4cdc-93fe-1474435c0904" (UID: "d062ee72-cb69-4cdc-93fe-1474435c0904"). InnerVolumeSpecName "kube-api-access-n8gwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.291996 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.305326 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d062ee72-cb69-4cdc-93fe-1474435c0904-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d062ee72-cb69-4cdc-93fe-1474435c0904" (UID: "d062ee72-cb69-4cdc-93fe-1474435c0904"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.323245 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8gwf\" (UniqueName: \"kubernetes.io/projected/d062ee72-cb69-4cdc-93fe-1474435c0904-kube-api-access-n8gwf\") on node \"crc\" DevicePath \"\"" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.323446 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d062ee72-cb69-4cdc-93fe-1474435c0904-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.359460 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d062ee72-cb69-4cdc-93fe-1474435c0904-inventory" (OuterVolumeSpecName: "inventory") pod "d062ee72-cb69-4cdc-93fe-1474435c0904" (UID: "d062ee72-cb69-4cdc-93fe-1474435c0904"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.424900 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d062ee72-cb69-4cdc-93fe-1474435c0904-inventory\") on node \"crc\" DevicePath \"\"" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.756606 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" event={"ID":"d062ee72-cb69-4cdc-93fe-1474435c0904","Type":"ContainerDied","Data":"1e06e5143d72ba52ab31eb91dac21ac9062d06bfb1ccc4539f99bdd963c4543b"} Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.756912 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e06e5143d72ba52ab31eb91dac21ac9062d06bfb1ccc4539f99bdd963c4543b" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.756678 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.871168 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7"] Dec 16 12:28:40 crc kubenswrapper[4805]: E1216 12:28:40.871708 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d062ee72-cb69-4cdc-93fe-1474435c0904" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.871732 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d062ee72-cb69-4cdc-93fe-1474435c0904" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.871936 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d062ee72-cb69-4cdc-93fe-1474435c0904" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.872802 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.879299 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.879447 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.879442 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.879530 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.907033 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7"] Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.938789 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk264\" (UniqueName: \"kubernetes.io/projected/9b007dae-6dbd-429b-85a3-a2087c098b68-kube-api-access-mk264\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7\" (UID: \"9b007dae-6dbd-429b-85a3-a2087c098b68\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.939122 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b007dae-6dbd-429b-85a3-a2087c098b68-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7\" (UID: \"9b007dae-6dbd-429b-85a3-a2087c098b68\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" Dec 16 12:28:40 crc kubenswrapper[4805]: I1216 12:28:40.939192 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b007dae-6dbd-429b-85a3-a2087c098b68-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7\" (UID: \"9b007dae-6dbd-429b-85a3-a2087c098b68\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" Dec 16 12:28:41 crc kubenswrapper[4805]: I1216 12:28:41.040505 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk264\" (UniqueName: \"kubernetes.io/projected/9b007dae-6dbd-429b-85a3-a2087c098b68-kube-api-access-mk264\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7\" (UID: \"9b007dae-6dbd-429b-85a3-a2087c098b68\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" Dec 16 12:28:41 crc kubenswrapper[4805]: I1216 12:28:41.040630 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b007dae-6dbd-429b-85a3-a2087c098b68-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7\" (UID: \"9b007dae-6dbd-429b-85a3-a2087c098b68\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" Dec 16 12:28:41 crc kubenswrapper[4805]: I1216 12:28:41.040656 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b007dae-6dbd-429b-85a3-a2087c098b68-inventory\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7\" (UID: \"9b007dae-6dbd-429b-85a3-a2087c098b68\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" Dec 16 12:28:41 crc kubenswrapper[4805]: I1216 12:28:41.041906 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvvm6"] Dec 16 12:28:41 crc kubenswrapper[4805]: I1216 12:28:41.048953 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b007dae-6dbd-429b-85a3-a2087c098b68-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7\" (UID: \"9b007dae-6dbd-429b-85a3-a2087c098b68\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" Dec 16 12:28:41 crc kubenswrapper[4805]: I1216 12:28:41.049301 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b007dae-6dbd-429b-85a3-a2087c098b68-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7\" (UID: \"9b007dae-6dbd-429b-85a3-a2087c098b68\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" Dec 16 12:28:41 crc kubenswrapper[4805]: I1216 12:28:41.064741 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk264\" (UniqueName: \"kubernetes.io/projected/9b007dae-6dbd-429b-85a3-a2087c098b68-kube-api-access-mk264\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7\" (UID: \"9b007dae-6dbd-429b-85a3-a2087c098b68\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" Dec 16 12:28:41 crc kubenswrapper[4805]: I1216 12:28:41.196921 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" Dec 16 12:28:41 crc kubenswrapper[4805]: I1216 12:28:41.658565 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7"] Dec 16 12:28:41 crc kubenswrapper[4805]: I1216 12:28:41.663448 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 16 12:28:41 crc kubenswrapper[4805]: I1216 12:28:41.766216 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" event={"ID":"9b007dae-6dbd-429b-85a3-a2087c098b68","Type":"ContainerStarted","Data":"7c30269e49c7141ec421664da3843768e4c5c2a0ee08381e261f6dd8ac2b25d4"} Dec 16 12:28:41 crc kubenswrapper[4805]: I1216 12:28:41.768557 4805 generic.go:334] "Generic (PLEG): container finished" podID="03ae6b34-8a5e-4691-9ee2-d019a0c4927e" containerID="37435387df5f0fc6dfdeff2b51ce45cdb10d462edac1f71167e8e0bb553fde6c" exitCode=0 Dec 16 12:28:41 crc kubenswrapper[4805]: I1216 12:28:41.768607 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvvm6" event={"ID":"03ae6b34-8a5e-4691-9ee2-d019a0c4927e","Type":"ContainerDied","Data":"37435387df5f0fc6dfdeff2b51ce45cdb10d462edac1f71167e8e0bb553fde6c"} Dec 16 12:28:41 crc kubenswrapper[4805]: I1216 12:28:41.768640 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvvm6" event={"ID":"03ae6b34-8a5e-4691-9ee2-d019a0c4927e","Type":"ContainerStarted","Data":"41864decf4d7b565fdb5c497dc64be50c645d2563f6058308c5cc15cc7df9d7f"} Dec 16 12:28:43 crc kubenswrapper[4805]: I1216 12:28:43.801598 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvvm6" event={"ID":"03ae6b34-8a5e-4691-9ee2-d019a0c4927e","Type":"ContainerStarted","Data":"c8c7e152933ea89987f807a63ab31ffd559e275a6ed6b07efe0c593563e91ebc"} Dec 16 12:28:43 crc kubenswrapper[4805]: I1216 12:28:43.803851 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" event={"ID":"9b007dae-6dbd-429b-85a3-a2087c098b68","Type":"ContainerStarted","Data":"8be2e51cae9272c62ee0fd2860b8b2decde64911ea1b0aa4000e6c2a34e515e5"} Dec 16 12:28:44 crc kubenswrapper[4805]: I1216 12:28:44.843966 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" podStartSLOduration=3.904516733 podStartE2EDuration="4.843945914s" podCreationTimestamp="2025-12-16 12:28:40 +0000 UTC" firstStartedPulling="2025-12-16 12:28:41.663239583 +0000 UTC m=+1995.381497388" lastFinishedPulling="2025-12-16 12:28:42.602668774 +0000 UTC m=+1996.320926569" observedRunningTime="2025-12-16 12:28:43.844434167 +0000 UTC m=+1997.562691992" watchObservedRunningTime="2025-12-16 12:28:44.843945914 +0000 UTC m=+1998.562203729" Dec 16 12:28:52 crc kubenswrapper[4805]: I1216 12:28:52.886748 4805 generic.go:334] "Generic (PLEG): container finished" podID="03ae6b34-8a5e-4691-9ee2-d019a0c4927e" containerID="c8c7e152933ea89987f807a63ab31ffd559e275a6ed6b07efe0c593563e91ebc" exitCode=0 Dec 16 12:28:52 crc kubenswrapper[4805]: I1216 12:28:52.886824 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvvm6" 
event={"ID":"03ae6b34-8a5e-4691-9ee2-d019a0c4927e","Type":"ContainerDied","Data":"c8c7e152933ea89987f807a63ab31ffd559e275a6ed6b07efe0c593563e91ebc"} Dec 16 12:28:53 crc kubenswrapper[4805]: I1216 12:28:53.897815 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvvm6" event={"ID":"03ae6b34-8a5e-4691-9ee2-d019a0c4927e","Type":"ContainerStarted","Data":"b5554c830466abfae24ba4ee2b19628e492516475171aeceb2ee263fd212ad10"} Dec 16 12:28:53 crc kubenswrapper[4805]: I1216 12:28:53.924835 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fvvm6" podStartSLOduration=3.221621927 podStartE2EDuration="14.924803804s" podCreationTimestamp="2025-12-16 12:28:39 +0000 UTC" firstStartedPulling="2025-12-16 12:28:41.771401835 +0000 UTC m=+1995.489659640" lastFinishedPulling="2025-12-16 12:28:53.474583712 +0000 UTC m=+2007.192841517" observedRunningTime="2025-12-16 12:28:53.917176386 +0000 UTC m=+2007.635434211" watchObservedRunningTime="2025-12-16 12:28:53.924803804 +0000 UTC m=+2007.643061649" Dec 16 12:28:58 crc kubenswrapper[4805]: I1216 12:28:58.330408 4805 scope.go:117] "RemoveContainer" containerID="552b27eb2df70cfeb8a37d30fa8ec0c68f64ff97687871e556c3135f2e670b68" Dec 16 12:29:00 crc kubenswrapper[4805]: I1216 12:29:00.293396 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:29:00 crc kubenswrapper[4805]: I1216 12:29:00.293733 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:29:01 crc kubenswrapper[4805]: I1216 12:29:01.350408 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fvvm6" podUID="03ae6b34-8a5e-4691-9ee2-d019a0c4927e" containerName="registry-server" probeResult="failure" output=< Dec 16 12:29:01 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 16 12:29:01 crc kubenswrapper[4805]: > Dec 16 12:29:10 crc kubenswrapper[4805]: I1216 12:29:10.339975 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:29:10 crc kubenswrapper[4805]: I1216 12:29:10.393426 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:29:11 crc kubenswrapper[4805]: I1216 12:29:11.116027 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvvm6"] Dec 16 12:29:12 crc kubenswrapper[4805]: I1216 12:29:12.083323 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fvvm6" podUID="03ae6b34-8a5e-4691-9ee2-d019a0c4927e" containerName="registry-server" containerID="cri-o://b5554c830466abfae24ba4ee2b19628e492516475171aeceb2ee263fd212ad10" gracePeriod=2 Dec 16 12:29:12 crc kubenswrapper[4805]: I1216 12:29:12.550914 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:29:12 crc kubenswrapper[4805]: I1216 12:29:12.710054 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-catalog-content\") pod \"03ae6b34-8a5e-4691-9ee2-d019a0c4927e\" (UID: \"03ae6b34-8a5e-4691-9ee2-d019a0c4927e\") " Dec 16 12:29:12 crc kubenswrapper[4805]: I1216 12:29:12.710493 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-utilities\") pod \"03ae6b34-8a5e-4691-9ee2-d019a0c4927e\" (UID: \"03ae6b34-8a5e-4691-9ee2-d019a0c4927e\") " Dec 16 12:29:12 crc kubenswrapper[4805]: I1216 12:29:12.710580 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzx87\" (UniqueName: \"kubernetes.io/projected/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-kube-api-access-vzx87\") pod \"03ae6b34-8a5e-4691-9ee2-d019a0c4927e\" (UID: \"03ae6b34-8a5e-4691-9ee2-d019a0c4927e\") " Dec 16 12:29:12 crc kubenswrapper[4805]: I1216 12:29:12.712679 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-utilities" (OuterVolumeSpecName: "utilities") pod "03ae6b34-8a5e-4691-9ee2-d019a0c4927e" (UID: "03ae6b34-8a5e-4691-9ee2-d019a0c4927e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:29:12 crc kubenswrapper[4805]: I1216 12:29:12.717390 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-kube-api-access-vzx87" (OuterVolumeSpecName: "kube-api-access-vzx87") pod "03ae6b34-8a5e-4691-9ee2-d019a0c4927e" (UID: "03ae6b34-8a5e-4691-9ee2-d019a0c4927e"). InnerVolumeSpecName "kube-api-access-vzx87". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:29:12 crc kubenswrapper[4805]: I1216 12:29:12.813918 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:29:12 crc kubenswrapper[4805]: I1216 12:29:12.813965 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzx87\" (UniqueName: \"kubernetes.io/projected/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-kube-api-access-vzx87\") on node \"crc\" DevicePath \"\"" Dec 16 12:29:12 crc kubenswrapper[4805]: I1216 12:29:12.834398 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03ae6b34-8a5e-4691-9ee2-d019a0c4927e" (UID: "03ae6b34-8a5e-4691-9ee2-d019a0c4927e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:29:12 crc kubenswrapper[4805]: I1216 12:29:12.915878 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03ae6b34-8a5e-4691-9ee2-d019a0c4927e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:29:13 crc kubenswrapper[4805]: I1216 12:29:13.094984 4805 generic.go:334] "Generic (PLEG): container finished" podID="03ae6b34-8a5e-4691-9ee2-d019a0c4927e" containerID="b5554c830466abfae24ba4ee2b19628e492516475171aeceb2ee263fd212ad10" exitCode=0 Dec 16 12:29:13 crc kubenswrapper[4805]: I1216 12:29:13.095026 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvvm6" event={"ID":"03ae6b34-8a5e-4691-9ee2-d019a0c4927e","Type":"ContainerDied","Data":"b5554c830466abfae24ba4ee2b19628e492516475171aeceb2ee263fd212ad10"} Dec 16 12:29:13 crc kubenswrapper[4805]: I1216 12:29:13.095060 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvvm6" event={"ID":"03ae6b34-8a5e-4691-9ee2-d019a0c4927e","Type":"ContainerDied","Data":"41864decf4d7b565fdb5c497dc64be50c645d2563f6058308c5cc15cc7df9d7f"} Dec 16 12:29:13 crc kubenswrapper[4805]: I1216 12:29:13.095080 4805 scope.go:117] "RemoveContainer" containerID="b5554c830466abfae24ba4ee2b19628e492516475171aeceb2ee263fd212ad10" Dec 16 12:29:13 crc kubenswrapper[4805]: I1216 12:29:13.095085 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvvm6" Dec 16 12:29:13 crc kubenswrapper[4805]: I1216 12:29:13.129435 4805 scope.go:117] "RemoveContainer" containerID="c8c7e152933ea89987f807a63ab31ffd559e275a6ed6b07efe0c593563e91ebc" Dec 16 12:29:13 crc kubenswrapper[4805]: I1216 12:29:13.160016 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvvm6"] Dec 16 12:29:13 crc kubenswrapper[4805]: I1216 12:29:13.172540 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fvvm6"] Dec 16 12:29:13 crc kubenswrapper[4805]: I1216 12:29:13.175585 4805 scope.go:117] "RemoveContainer" containerID="37435387df5f0fc6dfdeff2b51ce45cdb10d462edac1f71167e8e0bb553fde6c" Dec 16 12:29:13 crc kubenswrapper[4805]: I1216 12:29:13.222210 4805 scope.go:117] "RemoveContainer" containerID="b5554c830466abfae24ba4ee2b19628e492516475171aeceb2ee263fd212ad10" Dec 16 12:29:13 crc kubenswrapper[4805]: E1216 12:29:13.223542 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5554c830466abfae24ba4ee2b19628e492516475171aeceb2ee263fd212ad10\": container with ID starting with b5554c830466abfae24ba4ee2b19628e492516475171aeceb2ee263fd212ad10 not found: ID does not exist" containerID="b5554c830466abfae24ba4ee2b19628e492516475171aeceb2ee263fd212ad10" Dec 16 12:29:13 crc kubenswrapper[4805]: I1216 12:29:13.223705 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5554c830466abfae24ba4ee2b19628e492516475171aeceb2ee263fd212ad10"} err="failed to get container status \"b5554c830466abfae24ba4ee2b19628e492516475171aeceb2ee263fd212ad10\": rpc error: code = NotFound desc = could not find container \"b5554c830466abfae24ba4ee2b19628e492516475171aeceb2ee263fd212ad10\": container with ID starting with b5554c830466abfae24ba4ee2b19628e492516475171aeceb2ee263fd212ad10 not found: ID does not exist" Dec 16 12:29:13 crc 
kubenswrapper[4805]: I1216 12:29:13.223819 4805 scope.go:117] "RemoveContainer" containerID="c8c7e152933ea89987f807a63ab31ffd559e275a6ed6b07efe0c593563e91ebc" Dec 16 12:29:13 crc kubenswrapper[4805]: E1216 12:29:13.224592 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8c7e152933ea89987f807a63ab31ffd559e275a6ed6b07efe0c593563e91ebc\": container with ID starting with c8c7e152933ea89987f807a63ab31ffd559e275a6ed6b07efe0c593563e91ebc not found: ID does not exist" containerID="c8c7e152933ea89987f807a63ab31ffd559e275a6ed6b07efe0c593563e91ebc" Dec 16 12:29:13 crc kubenswrapper[4805]: I1216 12:29:13.224701 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8c7e152933ea89987f807a63ab31ffd559e275a6ed6b07efe0c593563e91ebc"} err="failed to get container status \"c8c7e152933ea89987f807a63ab31ffd559e275a6ed6b07efe0c593563e91ebc\": rpc error: code = NotFound desc = could not find container \"c8c7e152933ea89987f807a63ab31ffd559e275a6ed6b07efe0c593563e91ebc\": container with ID starting with c8c7e152933ea89987f807a63ab31ffd559e275a6ed6b07efe0c593563e91ebc not found: ID does not exist" Dec 16 12:29:13 crc kubenswrapper[4805]: I1216 12:29:13.224868 4805 scope.go:117] "RemoveContainer" containerID="37435387df5f0fc6dfdeff2b51ce45cdb10d462edac1f71167e8e0bb553fde6c" Dec 16 12:29:13 crc kubenswrapper[4805]: E1216 12:29:13.225519 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37435387df5f0fc6dfdeff2b51ce45cdb10d462edac1f71167e8e0bb553fde6c\": container with ID starting with 37435387df5f0fc6dfdeff2b51ce45cdb10d462edac1f71167e8e0bb553fde6c not found: ID does not exist" containerID="37435387df5f0fc6dfdeff2b51ce45cdb10d462edac1f71167e8e0bb553fde6c" Dec 16 12:29:13 crc kubenswrapper[4805]: I1216 12:29:13.225729 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37435387df5f0fc6dfdeff2b51ce45cdb10d462edac1f71167e8e0bb553fde6c"} err="failed to get container status \"37435387df5f0fc6dfdeff2b51ce45cdb10d462edac1f71167e8e0bb553fde6c\": rpc error: code = NotFound desc = could not find container \"37435387df5f0fc6dfdeff2b51ce45cdb10d462edac1f71167e8e0bb553fde6c\": container with ID starting with 37435387df5f0fc6dfdeff2b51ce45cdb10d462edac1f71167e8e0bb553fde6c not found: ID does not exist" Dec 16 12:29:14 crc kubenswrapper[4805]: I1216 12:29:14.535466 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ae6b34-8a5e-4691-9ee2-d019a0c4927e" path="/var/lib/kubelet/pods/03ae6b34-8a5e-4691-9ee2-d019a0c4927e/volumes" Dec 16 12:29:22 crc kubenswrapper[4805]: I1216 12:29:22.040357 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-c26fd"] Dec 16 12:29:22 crc kubenswrapper[4805]: I1216 12:29:22.047717 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-c26fd"] Dec 16 12:29:22 crc kubenswrapper[4805]: I1216 12:29:22.534129 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73a7a5f1-017b-4be6-b498-9c9d7416fada" path="/var/lib/kubelet/pods/73a7a5f1-017b-4be6-b498-9c9d7416fada/volumes" Dec 16 12:29:27 crc kubenswrapper[4805]: I1216 12:29:27.071402 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:29:27 crc kubenswrapper[4805]: I1216 12:29:27.072039 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:29:32 crc kubenswrapper[4805]: I1216 12:29:32.070794 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8n694"] Dec 16 12:29:32 crc kubenswrapper[4805]: I1216 12:29:32.091555 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8n694"] Dec 16 12:29:32 crc kubenswrapper[4805]: I1216 12:29:32.533689 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f06d9a3-948e-4e23-9ff0-e933b67ecb5f" path="/var/lib/kubelet/pods/6f06d9a3-948e-4e23-9ff0-e933b67ecb5f/volumes" Dec 16 12:29:57 crc kubenswrapper[4805]: I1216 12:29:57.071903 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:29:57 crc kubenswrapper[4805]: I1216 12:29:57.073694 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:29:58 crc kubenswrapper[4805]: I1216 12:29:58.837856 4805 scope.go:117] "RemoveContainer" containerID="63ecef3c06292f6c8645c7058de4184a348d677eae434d60d79d38f100662248" Dec 16 12:29:58 crc kubenswrapper[4805]: I1216 12:29:58.880227 4805 scope.go:117] "RemoveContainer" containerID="f39f20c711d9cf1226294beb703afab43635a50ca800b65894720531a4a46750" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.147924 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv"] Dec 16 12:30:00 crc kubenswrapper[4805]: E1216 12:30:00.148807 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ae6b34-8a5e-4691-9ee2-d019a0c4927e" containerName="extract-content" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.148821 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ae6b34-8a5e-4691-9ee2-d019a0c4927e" containerName="extract-content" Dec 16 12:30:00 crc kubenswrapper[4805]: E1216 12:30:00.148853 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ae6b34-8a5e-4691-9ee2-d019a0c4927e" containerName="registry-server" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.148859 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ae6b34-8a5e-4691-9ee2-d019a0c4927e" containerName="registry-server" Dec 16 12:30:00 crc kubenswrapper[4805]: E1216 12:30:00.148869 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ae6b34-8a5e-4691-9ee2-d019a0c4927e" containerName="extract-utilities" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.148874 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ae6b34-8a5e-4691-9ee2-d019a0c4927e" 
containerName="extract-utilities" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.149085 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ae6b34-8a5e-4691-9ee2-d019a0c4927e" containerName="registry-server" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.149830 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.152550 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.153828 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.159543 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv"] Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.274490 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdh75\" (UniqueName: \"kubernetes.io/projected/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-kube-api-access-bdh75\") pod \"collect-profiles-29431470-n8kpv\" (UID: \"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.274695 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-config-volume\") pod \"collect-profiles-29431470-n8kpv\" (UID: \"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.274862 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-secret-volume\") pod \"collect-profiles-29431470-n8kpv\" (UID: \"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.376499 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-config-volume\") pod \"collect-profiles-29431470-n8kpv\" (UID: \"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.377300 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-config-volume\") pod \"collect-profiles-29431470-n8kpv\" (UID: \"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.377584 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-secret-volume\") pod \"collect-profiles-29431470-n8kpv\" (UID: \"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.377931 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdh75\" (UniqueName: \"kubernetes.io/projected/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-kube-api-access-bdh75\") pod \"collect-profiles-29431470-n8kpv\" (UID: \"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.385989 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-secret-volume\") pod \"collect-profiles-29431470-n8kpv\" (UID: \"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.397809 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdh75\" (UniqueName: \"kubernetes.io/projected/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-kube-api-access-bdh75\") pod \"collect-profiles-29431470-n8kpv\" (UID: \"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.469435 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" Dec 16 12:30:00 crc kubenswrapper[4805]: I1216 12:30:00.940198 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv"] Dec 16 12:30:01 crc kubenswrapper[4805]: I1216 12:30:01.611125 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" event={"ID":"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7","Type":"ContainerStarted","Data":"d94d654db4b56ce7b9711854867cbdc5ffc0c00e262109b75f53332967c865a1"} Dec 16 12:30:01 crc kubenswrapper[4805]: I1216 12:30:01.611217 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" event={"ID":"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7","Type":"ContainerStarted","Data":"99509a49cab72b4afe0659c0f3cd376b5481144c14913f933f9d171c14537775"} Dec 16 12:30:03 crc kubenswrapper[4805]: I1216 12:30:03.662525 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" podStartSLOduration=3.662485774 podStartE2EDuration="3.662485774s" podCreationTimestamp="2025-12-16 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:30:03.648598746 +0000 UTC m=+2077.366856571" watchObservedRunningTime="2025-12-16 12:30:03.662485774 +0000 UTC m=+2077.380743649" Dec 16 12:30:05 crc kubenswrapper[4805]: I1216 12:30:05.047631 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-h574q"] Dec 16 12:30:05 crc kubenswrapper[4805]: I1216 12:30:05.054907 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-h574q"] Dec 16 12:30:05 crc kubenswrapper[4805]: I1216 12:30:05.659825 4805 generic.go:334] "Generic (PLEG): container finished" podID="ccfeac0e-ca88-4939-8ef7-f84513bc4eb7" 
containerID="d94d654db4b56ce7b9711854867cbdc5ffc0c00e262109b75f53332967c865a1" exitCode=0 Dec 16 12:30:05 crc kubenswrapper[4805]: I1216 12:30:05.659870 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" event={"ID":"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7","Type":"ContainerDied","Data":"d94d654db4b56ce7b9711854867cbdc5ffc0c00e262109b75f53332967c865a1"} Dec 16 12:30:06 crc kubenswrapper[4805]: I1216 12:30:06.534074 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8213d94b-5cc7-407e-aedf-298f52f52198" path="/var/lib/kubelet/pods/8213d94b-5cc7-407e-aedf-298f52f52198/volumes" Dec 16 12:30:06 crc kubenswrapper[4805]: I1216 12:30:06.991383 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" Dec 16 12:30:07 crc kubenswrapper[4805]: I1216 12:30:07.119902 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-secret-volume\") pod \"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7\" (UID: \"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7\") " Dec 16 12:30:07 crc kubenswrapper[4805]: I1216 12:30:07.120057 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdh75\" (UniqueName: \"kubernetes.io/projected/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-kube-api-access-bdh75\") pod \"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7\" (UID: \"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7\") " Dec 16 12:30:07 crc kubenswrapper[4805]: I1216 12:30:07.120097 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-config-volume\") pod \"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7\" (UID: \"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7\") " Dec 16 12:30:07 crc kubenswrapper[4805]: I1216 12:30:07.120761 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-config-volume" (OuterVolumeSpecName: "config-volume") pod "ccfeac0e-ca88-4939-8ef7-f84513bc4eb7" (UID: "ccfeac0e-ca88-4939-8ef7-f84513bc4eb7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:30:07 crc kubenswrapper[4805]: I1216 12:30:07.126701 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-kube-api-access-bdh75" (OuterVolumeSpecName: "kube-api-access-bdh75") pod "ccfeac0e-ca88-4939-8ef7-f84513bc4eb7" (UID: "ccfeac0e-ca88-4939-8ef7-f84513bc4eb7"). InnerVolumeSpecName "kube-api-access-bdh75". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:30:07 crc kubenswrapper[4805]: I1216 12:30:07.136563 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ccfeac0e-ca88-4939-8ef7-f84513bc4eb7" (UID: "ccfeac0e-ca88-4939-8ef7-f84513bc4eb7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:30:07 crc kubenswrapper[4805]: I1216 12:30:07.221938 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdh75\" (UniqueName: \"kubernetes.io/projected/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-kube-api-access-bdh75\") on node \"crc\" DevicePath \"\"" Dec 16 12:30:07 crc kubenswrapper[4805]: I1216 12:30:07.221979 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-config-volume\") on node \"crc\" DevicePath \"\"" Dec 16 12:30:07 crc kubenswrapper[4805]: I1216 12:30:07.221991 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 16 12:30:07 crc kubenswrapper[4805]: I1216 12:30:07.683594 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" event={"ID":"ccfeac0e-ca88-4939-8ef7-f84513bc4eb7","Type":"ContainerDied","Data":"99509a49cab72b4afe0659c0f3cd376b5481144c14913f933f9d171c14537775"} Dec 16 12:30:07 crc kubenswrapper[4805]: I1216 12:30:07.683650 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99509a49cab72b4afe0659c0f3cd376b5481144c14913f933f9d171c14537775" Dec 16 12:30:07 crc kubenswrapper[4805]: I1216 12:30:07.683650 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv" Dec 16 12:30:07 crc kubenswrapper[4805]: I1216 12:30:07.752532 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc"] Dec 16 12:30:07 crc kubenswrapper[4805]: I1216 12:30:07.761517 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431425-kc9qc"] Dec 16 12:30:08 crc kubenswrapper[4805]: I1216 12:30:08.541350 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9b2f10-5fc2-4784-9c34-5fc3ed544115" path="/var/lib/kubelet/pods/ce9b2f10-5fc2-4784-9c34-5fc3ed544115/volumes" Dec 16 12:30:08 crc kubenswrapper[4805]: I1216 12:30:08.696178 4805 generic.go:334] "Generic (PLEG): container finished" podID="9b007dae-6dbd-429b-85a3-a2087c098b68" containerID="8be2e51cae9272c62ee0fd2860b8b2decde64911ea1b0aa4000e6c2a34e515e5" exitCode=0 Dec 16 12:30:08 crc kubenswrapper[4805]: I1216 12:30:08.696239 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" event={"ID":"9b007dae-6dbd-429b-85a3-a2087c098b68","Type":"ContainerDied","Data":"8be2e51cae9272c62ee0fd2860b8b2decde64911ea1b0aa4000e6c2a34e515e5"} Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.195714 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.293162 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b007dae-6dbd-429b-85a3-a2087c098b68-inventory\") pod \"9b007dae-6dbd-429b-85a3-a2087c098b68\" (UID: \"9b007dae-6dbd-429b-85a3-a2087c098b68\") " Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.293291 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b007dae-6dbd-429b-85a3-a2087c098b68-ssh-key\") pod \"9b007dae-6dbd-429b-85a3-a2087c098b68\" (UID: \"9b007dae-6dbd-429b-85a3-a2087c098b68\") " Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.293376 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk264\" (UniqueName: \"kubernetes.io/projected/9b007dae-6dbd-429b-85a3-a2087c098b68-kube-api-access-mk264\") pod \"9b007dae-6dbd-429b-85a3-a2087c098b68\" (UID: \"9b007dae-6dbd-429b-85a3-a2087c098b68\") " Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.303703 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b007dae-6dbd-429b-85a3-a2087c098b68-kube-api-access-mk264" (OuterVolumeSpecName: "kube-api-access-mk264") pod "9b007dae-6dbd-429b-85a3-a2087c098b68" (UID: "9b007dae-6dbd-429b-85a3-a2087c098b68"). InnerVolumeSpecName "kube-api-access-mk264". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.331864 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b007dae-6dbd-429b-85a3-a2087c098b68-inventory" (OuterVolumeSpecName: "inventory") pod "9b007dae-6dbd-429b-85a3-a2087c098b68" (UID: "9b007dae-6dbd-429b-85a3-a2087c098b68"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.342803 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b007dae-6dbd-429b-85a3-a2087c098b68-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9b007dae-6dbd-429b-85a3-a2087c098b68" (UID: "9b007dae-6dbd-429b-85a3-a2087c098b68"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.397622 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b007dae-6dbd-429b-85a3-a2087c098b68-inventory\") on node \"crc\" DevicePath \"\"" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.397716 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9b007dae-6dbd-429b-85a3-a2087c098b68-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.397733 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk264\" (UniqueName: \"kubernetes.io/projected/9b007dae-6dbd-429b-85a3-a2087c098b68-kube-api-access-mk264\") on node \"crc\" DevicePath \"\"" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.716481 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" event={"ID":"9b007dae-6dbd-429b-85a3-a2087c098b68","Type":"ContainerDied","Data":"7c30269e49c7141ec421664da3843768e4c5c2a0ee08381e261f6dd8ac2b25d4"} Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.716521 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c30269e49c7141ec421664da3843768e4c5c2a0ee08381e261f6dd8ac2b25d4" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.716568 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.903496 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f"] Dec 16 12:30:10 crc kubenswrapper[4805]: E1216 12:30:10.904344 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b007dae-6dbd-429b-85a3-a2087c098b68" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.904362 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b007dae-6dbd-429b-85a3-a2087c098b68" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 16 12:30:10 crc kubenswrapper[4805]: E1216 12:30:10.904372 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfeac0e-ca88-4939-8ef7-f84513bc4eb7" containerName="collect-profiles" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.904380 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfeac0e-ca88-4939-8ef7-f84513bc4eb7" containerName="collect-profiles" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.904611 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfeac0e-ca88-4939-8ef7-f84513bc4eb7" containerName="collect-profiles" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.904624 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b007dae-6dbd-429b-85a3-a2087c098b68" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.905344 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.911886 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.911978 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.912022 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.911886 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2" Dec 16 12:30:10 crc kubenswrapper[4805]: I1216 12:30:10.938325 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f"] Dec 16 12:30:11 crc kubenswrapper[4805]: I1216 12:30:11.014709 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn97d\" (UniqueName: \"kubernetes.io/projected/87d9250a-d08e-4cfe-9619-d48ebdb2753c-kube-api-access-gn97d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f\" (UID: \"87d9250a-d08e-4cfe-9619-d48ebdb2753c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" Dec 16 12:30:11 crc kubenswrapper[4805]: I1216 12:30:11.015026 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87d9250a-d08e-4cfe-9619-d48ebdb2753c-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f\" (UID: \"87d9250a-d08e-4cfe-9619-d48ebdb2753c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" Dec 16 12:30:11 crc kubenswrapper[4805]: I1216 12:30:11.015405 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87d9250a-d08e-4cfe-9619-d48ebdb2753c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f\" (UID: \"87d9250a-d08e-4cfe-9619-d48ebdb2753c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" Dec 16 12:30:11 crc kubenswrapper[4805]: I1216 12:30:11.117784 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn97d\" (UniqueName: \"kubernetes.io/projected/87d9250a-d08e-4cfe-9619-d48ebdb2753c-kube-api-access-gn97d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f\" (UID: \"87d9250a-d08e-4cfe-9619-d48ebdb2753c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" Dec 16 12:30:11 crc kubenswrapper[4805]: I1216 12:30:11.117947 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87d9250a-d08e-4cfe-9619-d48ebdb2753c-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f\" (UID: \"87d9250a-d08e-4cfe-9619-d48ebdb2753c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" Dec 16 12:30:11 crc kubenswrapper[4805]: I1216 12:30:11.118027 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87d9250a-d08e-4cfe-9619-d48ebdb2753c-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f\" (UID: \"87d9250a-d08e-4cfe-9619-d48ebdb2753c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" Dec 16 12:30:11 crc kubenswrapper[4805]: I1216 12:30:11.123832 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87d9250a-d08e-4cfe-9619-d48ebdb2753c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f\" (UID: \"87d9250a-d08e-4cfe-9619-d48ebdb2753c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" Dec 16 12:30:11 crc kubenswrapper[4805]: I1216 12:30:11.129519 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87d9250a-d08e-4cfe-9619-d48ebdb2753c-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f\" (UID: \"87d9250a-d08e-4cfe-9619-d48ebdb2753c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" Dec 16 12:30:11 crc kubenswrapper[4805]: I1216 12:30:11.139881 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn97d\" (UniqueName: \"kubernetes.io/projected/87d9250a-d08e-4cfe-9619-d48ebdb2753c-kube-api-access-gn97d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f\" (UID: \"87d9250a-d08e-4cfe-9619-d48ebdb2753c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" Dec 16 12:30:11 crc kubenswrapper[4805]: I1216 12:30:11.245011 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" Dec 16 12:30:11 crc kubenswrapper[4805]: I1216 12:30:11.830972 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f"] Dec 16 12:30:12 crc kubenswrapper[4805]: I1216 12:30:12.735764 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" event={"ID":"87d9250a-d08e-4cfe-9619-d48ebdb2753c","Type":"ContainerStarted","Data":"9e04dc289f36cf3899dc28050809e6be0fd3ccaf1c31187f8be3603f4077aab8"} Dec 16 12:30:14 crc kubenswrapper[4805]: I1216 12:30:14.754775 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" event={"ID":"87d9250a-d08e-4cfe-9619-d48ebdb2753c","Type":"ContainerStarted","Data":"79f9fc6720dce1d541ea5f7c8375317616e50e36c1ecddabadeeb1fa4cfa0a4a"} Dec 16 12:30:14 crc kubenswrapper[4805]: I1216 12:30:14.778189 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" podStartSLOduration=3.110836878 podStartE2EDuration="4.778164844s" podCreationTimestamp="2025-12-16 12:30:10 +0000 UTC" firstStartedPulling="2025-12-16 12:30:11.859393571 +0000 UTC m=+2085.577651366" lastFinishedPulling="2025-12-16 12:30:13.526721527 +0000 UTC m=+2087.244979332" observedRunningTime="2025-12-16 12:30:14.768465356 +0000 UTC m=+2088.486723171" watchObservedRunningTime="2025-12-16 12:30:14.778164844 +0000 UTC m=+2088.496422659" Dec 16 12:30:19 crc kubenswrapper[4805]: I1216 12:30:19.799324 4805 generic.go:334] "Generic (PLEG): container finished" podID="87d9250a-d08e-4cfe-9619-d48ebdb2753c" containerID="79f9fc6720dce1d541ea5f7c8375317616e50e36c1ecddabadeeb1fa4cfa0a4a" exitCode=0 Dec 16 12:30:19 crc kubenswrapper[4805]: I1216 
Dec 16 12:30:19 crc kubenswrapper[4805]: I1216 12:30:19.799415 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" event={"ID":"87d9250a-d08e-4cfe-9619-d48ebdb2753c","Type":"ContainerDied","Data":"79f9fc6720dce1d541ea5f7c8375317616e50e36c1ecddabadeeb1fa4cfa0a4a"}
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.249432 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f"
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.340276 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn97d\" (UniqueName: \"kubernetes.io/projected/87d9250a-d08e-4cfe-9619-d48ebdb2753c-kube-api-access-gn97d\") pod \"87d9250a-d08e-4cfe-9619-d48ebdb2753c\" (UID: \"87d9250a-d08e-4cfe-9619-d48ebdb2753c\") "
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.340352 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87d9250a-d08e-4cfe-9619-d48ebdb2753c-inventory\") pod \"87d9250a-d08e-4cfe-9619-d48ebdb2753c\" (UID: \"87d9250a-d08e-4cfe-9619-d48ebdb2753c\") "
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.340588 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87d9250a-d08e-4cfe-9619-d48ebdb2753c-ssh-key\") pod \"87d9250a-d08e-4cfe-9619-d48ebdb2753c\" (UID: \"87d9250a-d08e-4cfe-9619-d48ebdb2753c\") "
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.346416 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d9250a-d08e-4cfe-9619-d48ebdb2753c-kube-api-access-gn97d" (OuterVolumeSpecName: "kube-api-access-gn97d") pod "87d9250a-d08e-4cfe-9619-d48ebdb2753c" (UID: "87d9250a-d08e-4cfe-9619-d48ebdb2753c"). InnerVolumeSpecName "kube-api-access-gn97d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.368730 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d9250a-d08e-4cfe-9619-d48ebdb2753c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "87d9250a-d08e-4cfe-9619-d48ebdb2753c" (UID: "87d9250a-d08e-4cfe-9619-d48ebdb2753c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.375339 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d9250a-d08e-4cfe-9619-d48ebdb2753c-inventory" (OuterVolumeSpecName: "inventory") pod "87d9250a-d08e-4cfe-9619-d48ebdb2753c" (UID: "87d9250a-d08e-4cfe-9619-d48ebdb2753c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.443371 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87d9250a-d08e-4cfe-9619-d48ebdb2753c-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.443429 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn97d\" (UniqueName: \"kubernetes.io/projected/87d9250a-d08e-4cfe-9619-d48ebdb2753c-kube-api-access-gn97d\") on node \"crc\" DevicePath \"\""
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.443443 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87d9250a-d08e-4cfe-9619-d48ebdb2753c-inventory\") on node \"crc\" DevicePath \"\""
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.819554 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f" event={"ID":"87d9250a-d08e-4cfe-9619-d48ebdb2753c","Type":"ContainerDied","Data":"9e04dc289f36cf3899dc28050809e6be0fd3ccaf1c31187f8be3603f4077aab8"}
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.819594 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e04dc289f36cf3899dc28050809e6be0fd3ccaf1c31187f8be3603f4077aab8"
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.819640 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f"
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.930876 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"]
Dec 16 12:30:21 crc kubenswrapper[4805]: E1216 12:30:21.931319 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d9250a-d08e-4cfe-9619-d48ebdb2753c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.931339 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d9250a-d08e-4cfe-9619-d48ebdb2753c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.931531 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d9250a-d08e-4cfe-9619-d48ebdb2753c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.932405 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.934323 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2"
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.935468 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.935623 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.935773 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.948486 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"]
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.952073 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpj78\" (UniqueName: \"kubernetes.io/projected/c3d927b4-1bc7-4093-ad62-19dd87d9888a-kube-api-access-wpj78\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-brghs\" (UID: \"c3d927b4-1bc7-4093-ad62-19dd87d9888a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.952316 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3d927b4-1bc7-4093-ad62-19dd87d9888a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-brghs\" (UID: \"c3d927b4-1bc7-4093-ad62-19dd87d9888a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"
Dec 16 12:30:21 crc kubenswrapper[4805]: I1216 12:30:21.952454 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c3d927b4-1bc7-4093-ad62-19dd87d9888a-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-brghs\" (UID: \"c3d927b4-1bc7-4093-ad62-19dd87d9888a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"
Dec 16 12:30:22 crc kubenswrapper[4805]: I1216 12:30:22.053618 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpj78\" (UniqueName: \"kubernetes.io/projected/c3d927b4-1bc7-4093-ad62-19dd87d9888a-kube-api-access-wpj78\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-brghs\" (UID: \"c3d927b4-1bc7-4093-ad62-19dd87d9888a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"
Dec 16 12:30:22 crc kubenswrapper[4805]: I1216 12:30:22.053698 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3d927b4-1bc7-4093-ad62-19dd87d9888a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-brghs\" (UID: \"c3d927b4-1bc7-4093-ad62-19dd87d9888a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"
Dec 16 12:30:22 crc kubenswrapper[4805]: I1216 12:30:22.053773 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c3d927b4-1bc7-4093-ad62-19dd87d9888a-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-brghs\" (UID: \"c3d927b4-1bc7-4093-ad62-19dd87d9888a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"
Dec 16 12:30:22 crc kubenswrapper[4805]: I1216 12:30:22.057419 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c3d927b4-1bc7-4093-ad62-19dd87d9888a-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-brghs\" (UID: \"c3d927b4-1bc7-4093-ad62-19dd87d9888a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"
Dec 16 12:30:22 crc kubenswrapper[4805]: I1216 12:30:22.058557 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3d927b4-1bc7-4093-ad62-19dd87d9888a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-brghs\" (UID: \"c3d927b4-1bc7-4093-ad62-19dd87d9888a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"
Dec 16 12:30:22 crc kubenswrapper[4805]: I1216 12:30:22.072923 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpj78\" (UniqueName: \"kubernetes.io/projected/c3d927b4-1bc7-4093-ad62-19dd87d9888a-kube-api-access-wpj78\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-brghs\" (UID: \"c3d927b4-1bc7-4093-ad62-19dd87d9888a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"
Dec 16 12:30:22 crc kubenswrapper[4805]: I1216 12:30:22.255860 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"
Dec 16 12:30:22 crc kubenswrapper[4805]: I1216 12:30:22.792183 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"]
Dec 16 12:30:22 crc kubenswrapper[4805]: I1216 12:30:22.830525 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs" event={"ID":"c3d927b4-1bc7-4093-ad62-19dd87d9888a","Type":"ContainerStarted","Data":"5298e433e0609425e5a2bb5d43a96b5fb5736e2b07fd7778a3a3611055027643"}
Dec 16 12:30:23 crc kubenswrapper[4805]: I1216 12:30:23.841829 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs" event={"ID":"c3d927b4-1bc7-4093-ad62-19dd87d9888a","Type":"ContainerStarted","Data":"463b9d24ed8a3952cc756280689f34c40dfccd583fa240b166218ff1611dcec2"}
Dec 16 12:30:23 crc kubenswrapper[4805]: I1216 12:30:23.861637 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs" podStartSLOduration=2.179643841 podStartE2EDuration="2.861614013s" podCreationTimestamp="2025-12-16 12:30:21 +0000 UTC" firstStartedPulling="2025-12-16 12:30:22.792668111 +0000 UTC m=+2096.510925916" lastFinishedPulling="2025-12-16 12:30:23.474638283 +0000 UTC m=+2097.192896088" observedRunningTime="2025-12-16 12:30:23.858687169 +0000 UTC m=+2097.576944974" watchObservedRunningTime="2025-12-16 12:30:23.861614013 +0000 UTC m=+2097.579871828"
Dec 16 12:30:27 crc kubenswrapper[4805]: I1216 12:30:27.072300 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 16 12:30:27 crc kubenswrapper[4805]: I1216 12:30:27.072934 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 16 12:30:27 crc kubenswrapper[4805]: I1216 12:30:27.072986 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98"
Dec 16 12:30:27 crc kubenswrapper[4805]: I1216 12:30:27.073660 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"956ad6f3f2307c0a334de402c87f8d9140bbfe291ff841eaa18f652f0b7f65f8"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 16 12:30:27 crc kubenswrapper[4805]: I1216 12:30:27.073715 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://956ad6f3f2307c0a334de402c87f8d9140bbfe291ff841eaa18f652f0b7f65f8" gracePeriod=600
Dec 16 12:30:27 crc kubenswrapper[4805]: I1216 12:30:27.885569 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="956ad6f3f2307c0a334de402c87f8d9140bbfe291ff841eaa18f652f0b7f65f8" exitCode=0
Dec 16 12:30:27 crc kubenswrapper[4805]: I1216 12:30:27.885655 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"956ad6f3f2307c0a334de402c87f8d9140bbfe291ff841eaa18f652f0b7f65f8"}
Dec 16 12:30:27 crc kubenswrapper[4805]: I1216 12:30:27.886452 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b"}
Dec 16 12:30:27 crc kubenswrapper[4805]: I1216 12:30:27.886515 4805 scope.go:117] "RemoveContainer" containerID="a74619ee5355232ac11cd64844f6a42fb000940a2364b7b7180ea47b81f64308"
Dec 16 12:30:59 crc kubenswrapper[4805]: I1216 12:30:59.039585 4805 scope.go:117] "RemoveContainer" containerID="9bfca712ca63c20fa5d088250ff1b0f5b3ee8b603e1647e152a27f782cce0573"
Dec 16 12:30:59 crc kubenswrapper[4805]: I1216 12:30:59.095110 4805 scope.go:117] "RemoveContainer" containerID="2800e45a02b68fc9c957a021438bfc2a843fa13420d2c3618f2e56720135844b"
Dec 16 12:31:05 crc kubenswrapper[4805]: I1216 12:31:05.254564 4805 generic.go:334] "Generic (PLEG): container finished" podID="c3d927b4-1bc7-4093-ad62-19dd87d9888a" containerID="463b9d24ed8a3952cc756280689f34c40dfccd583fa240b166218ff1611dcec2" exitCode=0
Dec 16 12:31:05 crc kubenswrapper[4805]: I1216 12:31:05.254686 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs" event={"ID":"c3d927b4-1bc7-4093-ad62-19dd87d9888a","Type":"ContainerDied","Data":"463b9d24ed8a3952cc756280689f34c40dfccd583fa240b166218ff1611dcec2"}
Dec 16 12:31:06 crc kubenswrapper[4805]: I1216 12:31:06.684229 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"
Dec 16 12:31:06 crc kubenswrapper[4805]: I1216 12:31:06.784190 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c3d927b4-1bc7-4093-ad62-19dd87d9888a-ssh-key\") pod \"c3d927b4-1bc7-4093-ad62-19dd87d9888a\" (UID: \"c3d927b4-1bc7-4093-ad62-19dd87d9888a\") "
Dec 16 12:31:06 crc kubenswrapper[4805]: I1216 12:31:06.784314 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpj78\" (UniqueName: \"kubernetes.io/projected/c3d927b4-1bc7-4093-ad62-19dd87d9888a-kube-api-access-wpj78\") pod \"c3d927b4-1bc7-4093-ad62-19dd87d9888a\" (UID: \"c3d927b4-1bc7-4093-ad62-19dd87d9888a\") "
Dec 16 12:31:06 crc kubenswrapper[4805]: I1216 12:31:06.784417 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3d927b4-1bc7-4093-ad62-19dd87d9888a-inventory\") pod \"c3d927b4-1bc7-4093-ad62-19dd87d9888a\" (UID: \"c3d927b4-1bc7-4093-ad62-19dd87d9888a\") "
Dec 16 12:31:06 crc kubenswrapper[4805]: I1216 12:31:06.789681 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3d927b4-1bc7-4093-ad62-19dd87d9888a-kube-api-access-wpj78" (OuterVolumeSpecName: "kube-api-access-wpj78") pod "c3d927b4-1bc7-4093-ad62-19dd87d9888a" (UID: "c3d927b4-1bc7-4093-ad62-19dd87d9888a"). InnerVolumeSpecName "kube-api-access-wpj78". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 12:31:06 crc kubenswrapper[4805]: I1216 12:31:06.819702 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3d927b4-1bc7-4093-ad62-19dd87d9888a-inventory" (OuterVolumeSpecName: "inventory") pod "c3d927b4-1bc7-4093-ad62-19dd87d9888a" (UID: "c3d927b4-1bc7-4093-ad62-19dd87d9888a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 12:31:06 crc kubenswrapper[4805]: I1216 12:31:06.840912 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3d927b4-1bc7-4093-ad62-19dd87d9888a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c3d927b4-1bc7-4093-ad62-19dd87d9888a" (UID: "c3d927b4-1bc7-4093-ad62-19dd87d9888a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 12:31:06 crc kubenswrapper[4805]: I1216 12:31:06.886496 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c3d927b4-1bc7-4093-ad62-19dd87d9888a-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 16 12:31:06 crc kubenswrapper[4805]: I1216 12:31:06.886528 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpj78\" (UniqueName: \"kubernetes.io/projected/c3d927b4-1bc7-4093-ad62-19dd87d9888a-kube-api-access-wpj78\") on node \"crc\" DevicePath \"\""
Dec 16 12:31:06 crc kubenswrapper[4805]: I1216 12:31:06.886539 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3d927b4-1bc7-4093-ad62-19dd87d9888a-inventory\") on node \"crc\" DevicePath \"\""
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.275732 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs" event={"ID":"c3d927b4-1bc7-4093-ad62-19dd87d9888a","Type":"ContainerDied","Data":"5298e433e0609425e5a2bb5d43a96b5fb5736e2b07fd7778a3a3611055027643"}
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.275994 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5298e433e0609425e5a2bb5d43a96b5fb5736e2b07fd7778a3a3611055027643"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.275787 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-brghs"
Dec 16 12:31:07 crc kubenswrapper[4805]: E1216 12:31:07.350877 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3d927b4_1bc7_4093_ad62_19dd87d9888a.slice\": RecentStats: unable to find data in memory cache]"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.442262 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk"]
Dec 16 12:31:07 crc kubenswrapper[4805]: E1216 12:31:07.442834 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d927b4-1bc7-4093-ad62-19dd87d9888a" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.442864 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d927b4-1bc7-4093-ad62-19dd87d9888a" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.443165 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3d927b4-1bc7-4093-ad62-19dd87d9888a" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.443980 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.449517 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.451795 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.452326 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.453390 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.461648 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk"]
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.602829 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cb116b7-43db-435a-b4b1-59447b57c611-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk\" (UID: \"9cb116b7-43db-435a-b4b1-59447b57c611\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.603268 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hshcj\" (UniqueName: \"kubernetes.io/projected/9cb116b7-43db-435a-b4b1-59447b57c611-kube-api-access-hshcj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk\" (UID: \"9cb116b7-43db-435a-b4b1-59447b57c611\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.603378 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cb116b7-43db-435a-b4b1-59447b57c611-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk\" (UID: \"9cb116b7-43db-435a-b4b1-59447b57c611\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.705289 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hshcj\" (UniqueName: \"kubernetes.io/projected/9cb116b7-43db-435a-b4b1-59447b57c611-kube-api-access-hshcj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk\" (UID: \"9cb116b7-43db-435a-b4b1-59447b57c611\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.705344 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cb116b7-43db-435a-b4b1-59447b57c611-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk\" (UID: \"9cb116b7-43db-435a-b4b1-59447b57c611\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.705454 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cb116b7-43db-435a-b4b1-59447b57c611-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk\" (UID: \"9cb116b7-43db-435a-b4b1-59447b57c611\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.711084 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cb116b7-43db-435a-b4b1-59447b57c611-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk\" (UID: \"9cb116b7-43db-435a-b4b1-59447b57c611\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.712389 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cb116b7-43db-435a-b4b1-59447b57c611-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk\" (UID: \"9cb116b7-43db-435a-b4b1-59447b57c611\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.726072 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hshcj\" (UniqueName: \"kubernetes.io/projected/9cb116b7-43db-435a-b4b1-59447b57c611-kube-api-access-hshcj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk\" (UID: \"9cb116b7-43db-435a-b4b1-59447b57c611\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk"
Dec 16 12:31:07 crc kubenswrapper[4805]: I1216 12:31:07.774685 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk"
Dec 16 12:31:08 crc kubenswrapper[4805]: I1216 12:31:08.311250 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk"]
Dec 16 12:31:09 crc kubenswrapper[4805]: I1216 12:31:09.297681 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk" event={"ID":"9cb116b7-43db-435a-b4b1-59447b57c611","Type":"ContainerStarted","Data":"fd5881bf415df09fc52f80e6038a9613bf68cfcf203af1dce8ee0f7ede8c3cb3"}
Dec 16 12:31:11 crc kubenswrapper[4805]: I1216 12:31:11.317599 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk" event={"ID":"9cb116b7-43db-435a-b4b1-59447b57c611","Type":"ContainerStarted","Data":"63a97fb7ae03da8a5c83d055d85b5c5b2fcf01a5608b82fbbaf3cae85d22a235"}
Dec 16 12:31:11 crc kubenswrapper[4805]: I1216 12:31:11.342778 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk" podStartSLOduration=1.9504287329999999 podStartE2EDuration="4.342746433s" podCreationTimestamp="2025-12-16 12:31:07 +0000 UTC" firstStartedPulling="2025-12-16 12:31:08.315562083 +0000 UTC m=+2142.033819888" lastFinishedPulling="2025-12-16 12:31:10.707879773 +0000 UTC m=+2144.426137588" observedRunningTime="2025-12-16 12:31:11.334487546 +0000 UTC m=+2145.052745351" watchObservedRunningTime="2025-12-16 12:31:11.342746433 +0000 UTC m=+2145.061004248"
Dec 16 12:32:05 crc kubenswrapper[4805]: I1216 12:32:05.833668 4805 generic.go:334] "Generic (PLEG): container finished" podID="9cb116b7-43db-435a-b4b1-59447b57c611" containerID="63a97fb7ae03da8a5c83d055d85b5c5b2fcf01a5608b82fbbaf3cae85d22a235" exitCode=0
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk" event={"ID":"9cb116b7-43db-435a-b4b1-59447b57c611","Type":"ContainerDied","Data":"63a97fb7ae03da8a5c83d055d85b5c5b2fcf01a5608b82fbbaf3cae85d22a235"} Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.303444 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.453958 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hshcj\" (UniqueName: \"kubernetes.io/projected/9cb116b7-43db-435a-b4b1-59447b57c611-kube-api-access-hshcj\") pod \"9cb116b7-43db-435a-b4b1-59447b57c611\" (UID: \"9cb116b7-43db-435a-b4b1-59447b57c611\") " Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.454005 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cb116b7-43db-435a-b4b1-59447b57c611-ssh-key\") pod \"9cb116b7-43db-435a-b4b1-59447b57c611\" (UID: \"9cb116b7-43db-435a-b4b1-59447b57c611\") " Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.454132 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cb116b7-43db-435a-b4b1-59447b57c611-inventory\") pod \"9cb116b7-43db-435a-b4b1-59447b57c611\" (UID: \"9cb116b7-43db-435a-b4b1-59447b57c611\") " Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.464516 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb116b7-43db-435a-b4b1-59447b57c611-kube-api-access-hshcj" (OuterVolumeSpecName: "kube-api-access-hshcj") pod "9cb116b7-43db-435a-b4b1-59447b57c611" (UID: "9cb116b7-43db-435a-b4b1-59447b57c611"). InnerVolumeSpecName "kube-api-access-hshcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.486950 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb116b7-43db-435a-b4b1-59447b57c611-inventory" (OuterVolumeSpecName: "inventory") pod "9cb116b7-43db-435a-b4b1-59447b57c611" (UID: "9cb116b7-43db-435a-b4b1-59447b57c611"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.501476 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb116b7-43db-435a-b4b1-59447b57c611-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9cb116b7-43db-435a-b4b1-59447b57c611" (UID: "9cb116b7-43db-435a-b4b1-59447b57c611"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.556255 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cb116b7-43db-435a-b4b1-59447b57c611-inventory\") on node \"crc\" DevicePath \"\"" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.556286 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cb116b7-43db-435a-b4b1-59447b57c611-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.556296 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hshcj\" (UniqueName: \"kubernetes.io/projected/9cb116b7-43db-435a-b4b1-59447b57c611-kube-api-access-hshcj\") on node \"crc\" DevicePath \"\"" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.863692 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk" event={"ID":"9cb116b7-43db-435a-b4b1-59447b57c611","Type":"ContainerDied","Data":"fd5881bf415df09fc52f80e6038a9613bf68cfcf203af1dce8ee0f7ede8c3cb3"} Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.863725 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.863737 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd5881bf415df09fc52f80e6038a9613bf68cfcf203af1dce8ee0f7ede8c3cb3" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.963030 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jw42t"] Dec 16 12:32:07 crc kubenswrapper[4805]: E1216 12:32:07.966743 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb116b7-43db-435a-b4b1-59447b57c611" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.966769 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb116b7-43db-435a-b4b1-59447b57c611" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.966988 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb116b7-43db-435a-b4b1-59447b57c611" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.967711 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.971499 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.971871 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.972034 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.972280 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2" Dec 16 12:32:07 crc kubenswrapper[4805]: I1216 12:32:07.978857 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jw42t"] Dec 16 12:32:08 crc kubenswrapper[4805]: I1216 12:32:08.068315 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jw42t\" (UID: \"2af0c7ba-b7ca-40cd-9443-f9cb126211b0\") " pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" Dec 16 12:32:08 crc kubenswrapper[4805]: I1216 12:32:08.068425 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mchr\" (UniqueName: \"kubernetes.io/projected/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-kube-api-access-4mchr\") pod \"ssh-known-hosts-edpm-deployment-jw42t\" (UID: \"2af0c7ba-b7ca-40cd-9443-f9cb126211b0\") " pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" Dec 16 12:32:08 crc kubenswrapper[4805]: I1216 12:32:08.068475 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jw42t\" (UID: \"2af0c7ba-b7ca-40cd-9443-f9cb126211b0\") " pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" Dec 16 12:32:08 crc kubenswrapper[4805]: I1216 12:32:08.169763 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mchr\" (UniqueName: \"kubernetes.io/projected/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-kube-api-access-4mchr\") pod \"ssh-known-hosts-edpm-deployment-jw42t\" (UID: \"2af0c7ba-b7ca-40cd-9443-f9cb126211b0\") " pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" Dec 16 12:32:08 crc kubenswrapper[4805]: I1216 12:32:08.170126 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jw42t\" (UID: \"2af0c7ba-b7ca-40cd-9443-f9cb126211b0\") " pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" Dec 16 12:32:08 crc kubenswrapper[4805]: I1216 12:32:08.170314 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jw42t\" (UID: \"2af0c7ba-b7ca-40cd-9443-f9cb126211b0\") " pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" Dec 16 12:32:08 crc 
kubenswrapper[4805]: I1216 12:32:08.175349 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jw42t\" (UID: \"2af0c7ba-b7ca-40cd-9443-f9cb126211b0\") " pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" Dec 16 12:32:08 crc kubenswrapper[4805]: I1216 12:32:08.184259 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jw42t\" (UID: \"2af0c7ba-b7ca-40cd-9443-f9cb126211b0\") " pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" Dec 16 12:32:08 crc kubenswrapper[4805]: I1216 12:32:08.186330 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mchr\" (UniqueName: \"kubernetes.io/projected/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-kube-api-access-4mchr\") pod \"ssh-known-hosts-edpm-deployment-jw42t\" (UID: \"2af0c7ba-b7ca-40cd-9443-f9cb126211b0\") " pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" Dec 16 12:32:08 crc kubenswrapper[4805]: I1216 12:32:08.298576 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" Dec 16 12:32:08 crc kubenswrapper[4805]: W1216 12:32:08.835582 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2af0c7ba_b7ca_40cd_9443_f9cb126211b0.slice/crio-980f53c9e7b0e604c7a435deef0e23631d15a4e9035e935b362a605f4869fec8 WatchSource:0}: Error finding container 980f53c9e7b0e604c7a435deef0e23631d15a4e9035e935b362a605f4869fec8: Status 404 returned error can't find the container with id 980f53c9e7b0e604c7a435deef0e23631d15a4e9035e935b362a605f4869fec8 Dec 16 12:32:08 crc kubenswrapper[4805]: I1216 12:32:08.843592 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jw42t"] Dec 16 12:32:08 crc kubenswrapper[4805]: I1216 12:32:08.876021 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" event={"ID":"2af0c7ba-b7ca-40cd-9443-f9cb126211b0","Type":"ContainerStarted","Data":"980f53c9e7b0e604c7a435deef0e23631d15a4e9035e935b362a605f4869fec8"} Dec 16 12:32:09 crc kubenswrapper[4805]: I1216 12:32:09.888324 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" event={"ID":"2af0c7ba-b7ca-40cd-9443-f9cb126211b0","Type":"ContainerStarted","Data":"6fdab461cb109c608ea7db07243d3500aea214abf053971d20a3a71ed4d5dbe4"} Dec 16 12:32:09 crc kubenswrapper[4805]: I1216 12:32:09.909933 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" podStartSLOduration=2.344305252 podStartE2EDuration="2.909911826s" podCreationTimestamp="2025-12-16 12:32:07 +0000 UTC" firstStartedPulling="2025-12-16 12:32:08.841200552 +0000 UTC m=+2202.559458357" lastFinishedPulling="2025-12-16 12:32:09.406807126 +0000 UTC m=+2203.125064931" observedRunningTime="2025-12-16 12:32:09.902224256 +0000 UTC m=+2203.620482061" watchObservedRunningTime="2025-12-16 12:32:09.909911826 +0000 UTC m=+2203.628169641" Dec 16 12:32:17 crc kubenswrapper[4805]: I1216 12:32:17.976278 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="2af0c7ba-b7ca-40cd-9443-f9cb126211b0" containerID="6fdab461cb109c608ea7db07243d3500aea214abf053971d20a3a71ed4d5dbe4" exitCode=0 Dec 16 12:32:17 crc kubenswrapper[4805]: I1216 12:32:17.976376 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" event={"ID":"2af0c7ba-b7ca-40cd-9443-f9cb126211b0","Type":"ContainerDied","Data":"6fdab461cb109c608ea7db07243d3500aea214abf053971d20a3a71ed4d5dbe4"} Dec 16 12:32:19 crc kubenswrapper[4805]: I1216 12:32:19.542267 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" Dec 16 12:32:19 crc kubenswrapper[4805]: I1216 12:32:19.575660 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-ssh-key-openstack-edpm-ipam\") pod \"2af0c7ba-b7ca-40cd-9443-f9cb126211b0\" (UID: \"2af0c7ba-b7ca-40cd-9443-f9cb126211b0\") " Dec 16 12:32:19 crc kubenswrapper[4805]: I1216 12:32:19.576004 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mchr\" (UniqueName: \"kubernetes.io/projected/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-kube-api-access-4mchr\") pod \"2af0c7ba-b7ca-40cd-9443-f9cb126211b0\" (UID: \"2af0c7ba-b7ca-40cd-9443-f9cb126211b0\") " Dec 16 12:32:19 crc kubenswrapper[4805]: I1216 12:32:19.576052 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-inventory-0\") pod \"2af0c7ba-b7ca-40cd-9443-f9cb126211b0\" (UID: \"2af0c7ba-b7ca-40cd-9443-f9cb126211b0\") " Dec 16 12:32:19 crc kubenswrapper[4805]: I1216 12:32:19.582358 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-kube-api-access-4mchr" (OuterVolumeSpecName: "kube-api-access-4mchr") pod "2af0c7ba-b7ca-40cd-9443-f9cb126211b0" (UID: "2af0c7ba-b7ca-40cd-9443-f9cb126211b0"). InnerVolumeSpecName "kube-api-access-4mchr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:32:19 crc kubenswrapper[4805]: I1216 12:32:19.617959 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "2af0c7ba-b7ca-40cd-9443-f9cb126211b0" (UID: "2af0c7ba-b7ca-40cd-9443-f9cb126211b0"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:32:19 crc kubenswrapper[4805]: I1216 12:32:19.618486 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2af0c7ba-b7ca-40cd-9443-f9cb126211b0" (UID: "2af0c7ba-b7ca-40cd-9443-f9cb126211b0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:32:19 crc kubenswrapper[4805]: I1216 12:32:19.678248 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mchr\" (UniqueName: \"kubernetes.io/projected/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-kube-api-access-4mchr\") on node \"crc\" DevicePath \"\"" Dec 16 12:32:19 crc kubenswrapper[4805]: I1216 12:32:19.678287 4805 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:32:19 crc kubenswrapper[4805]: I1216 12:32:19.678314 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2af0c7ba-b7ca-40cd-9443-f9cb126211b0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:19.998288 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" event={"ID":"2af0c7ba-b7ca-40cd-9443-f9cb126211b0","Type":"ContainerDied","Data":"980f53c9e7b0e604c7a435deef0e23631d15a4e9035e935b362a605f4869fec8"} Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:19.998337 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="980f53c9e7b0e604c7a435deef0e23631d15a4e9035e935b362a605f4869fec8" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:19.998337 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jw42t" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.087256 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr"] Dec 16 12:32:20 crc kubenswrapper[4805]: E1216 12:32:20.087906 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2af0c7ba-b7ca-40cd-9443-f9cb126211b0" containerName="ssh-known-hosts-edpm-deployment" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.087929 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af0c7ba-b7ca-40cd-9443-f9cb126211b0" containerName="ssh-known-hosts-edpm-deployment" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.088168 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2af0c7ba-b7ca-40cd-9443-f9cb126211b0" containerName="ssh-known-hosts-edpm-deployment" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.088867 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.091015 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.092550 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.092788 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.095848 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.100010 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr"] Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.185949 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c058268-e16d-417e-8375-014b2cd1d3a5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s5wgr\" (UID: \"3c058268-e16d-417e-8375-014b2cd1d3a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.186008 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km2vk\" (UniqueName: \"kubernetes.io/projected/3c058268-e16d-417e-8375-014b2cd1d3a5-kube-api-access-km2vk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s5wgr\" (UID: \"3c058268-e16d-417e-8375-014b2cd1d3a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.186107 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c058268-e16d-417e-8375-014b2cd1d3a5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s5wgr\" (UID: \"3c058268-e16d-417e-8375-014b2cd1d3a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.288438 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c058268-e16d-417e-8375-014b2cd1d3a5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s5wgr\" (UID: \"3c058268-e16d-417e-8375-014b2cd1d3a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.288540 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c058268-e16d-417e-8375-014b2cd1d3a5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s5wgr\" (UID: \"3c058268-e16d-417e-8375-014b2cd1d3a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.288575 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km2vk\" (UniqueName: \"kubernetes.io/projected/3c058268-e16d-417e-8375-014b2cd1d3a5-kube-api-access-km2vk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s5wgr\" (UID: \"3c058268-e16d-417e-8375-014b2cd1d3a5\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.293767 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c058268-e16d-417e-8375-014b2cd1d3a5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s5wgr\" (UID: \"3c058268-e16d-417e-8375-014b2cd1d3a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.301017 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c058268-e16d-417e-8375-014b2cd1d3a5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s5wgr\" (UID: \"3c058268-e16d-417e-8375-014b2cd1d3a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.322416 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km2vk\" (UniqueName: \"kubernetes.io/projected/3c058268-e16d-417e-8375-014b2cd1d3a5-kube-api-access-km2vk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s5wgr\" (UID: \"3c058268-e16d-417e-8375-014b2cd1d3a5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" Dec 16 12:32:20 crc kubenswrapper[4805]: I1216 12:32:20.416483 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" Dec 16 12:32:20 crc kubenswrapper[4805]: W1216 12:32:20.992990 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c058268_e16d_417e_8375_014b2cd1d3a5.slice/crio-057ec094cd4eeb46e2ad7f729cabada9c175abe70bbf329ce1a47dfdf495079d WatchSource:0}: Error finding container 057ec094cd4eeb46e2ad7f729cabada9c175abe70bbf329ce1a47dfdf495079d: Status 404 returned error can't find the container with id 057ec094cd4eeb46e2ad7f729cabada9c175abe70bbf329ce1a47dfdf495079d Dec 16 12:32:21 crc kubenswrapper[4805]: I1216 12:32:21.006784 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr"] Dec 16 12:32:21 crc kubenswrapper[4805]: I1216 12:32:21.030828 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" event={"ID":"3c058268-e16d-417e-8375-014b2cd1d3a5","Type":"ContainerStarted","Data":"057ec094cd4eeb46e2ad7f729cabada9c175abe70bbf329ce1a47dfdf495079d"} Dec 16 12:32:22 crc kubenswrapper[4805]: I1216 12:32:22.042484 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" event={"ID":"3c058268-e16d-417e-8375-014b2cd1d3a5","Type":"ContainerStarted","Data":"1584cd768baa76ab2afd8bf7c81065562fc19e32c45dcd7a2a9b088f607de9db"} Dec 16 12:32:22 crc kubenswrapper[4805]: I1216 12:32:22.063770 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" podStartSLOduration=1.5833853709999999 podStartE2EDuration="2.063749681s" podCreationTimestamp="2025-12-16 12:32:20 +0000 UTC" firstStartedPulling="2025-12-16 12:32:21.004287229 +0000 UTC m=+2214.722545034" lastFinishedPulling="2025-12-16 12:32:21.484651539 +0000 UTC m=+2215.202909344" observedRunningTime="2025-12-16 12:32:22.05951154 +0000 UTC m=+2215.777769345" watchObservedRunningTime="2025-12-16 12:32:22.063749681 +0000 UTC 
m=+2215.782007496" Dec 16 12:32:27 crc kubenswrapper[4805]: I1216 12:32:27.072010 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:32:27 crc kubenswrapper[4805]: I1216 12:32:27.072667 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:32:31 crc kubenswrapper[4805]: I1216 12:32:31.135897 4805 generic.go:334] "Generic (PLEG): container finished" podID="3c058268-e16d-417e-8375-014b2cd1d3a5" containerID="1584cd768baa76ab2afd8bf7c81065562fc19e32c45dcd7a2a9b088f607de9db" exitCode=0 Dec 16 12:32:31 crc kubenswrapper[4805]: I1216 12:32:31.136019 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" event={"ID":"3c058268-e16d-417e-8375-014b2cd1d3a5","Type":"ContainerDied","Data":"1584cd768baa76ab2afd8bf7c81065562fc19e32c45dcd7a2a9b088f607de9db"} Dec 16 12:32:32 crc kubenswrapper[4805]: I1216 12:32:32.578831 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" Dec 16 12:32:32 crc kubenswrapper[4805]: I1216 12:32:32.745287 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c058268-e16d-417e-8375-014b2cd1d3a5-inventory\") pod \"3c058268-e16d-417e-8375-014b2cd1d3a5\" (UID: \"3c058268-e16d-417e-8375-014b2cd1d3a5\") " Dec 16 12:32:32 crc kubenswrapper[4805]: I1216 12:32:32.745793 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km2vk\" (UniqueName: \"kubernetes.io/projected/3c058268-e16d-417e-8375-014b2cd1d3a5-kube-api-access-km2vk\") pod \"3c058268-e16d-417e-8375-014b2cd1d3a5\" (UID: \"3c058268-e16d-417e-8375-014b2cd1d3a5\") " Dec 16 12:32:32 crc kubenswrapper[4805]: I1216 12:32:32.745840 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c058268-e16d-417e-8375-014b2cd1d3a5-ssh-key\") pod \"3c058268-e16d-417e-8375-014b2cd1d3a5\" (UID: \"3c058268-e16d-417e-8375-014b2cd1d3a5\") " Dec 16 12:32:32 crc kubenswrapper[4805]: I1216 12:32:32.755075 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c058268-e16d-417e-8375-014b2cd1d3a5-kube-api-access-km2vk" (OuterVolumeSpecName: "kube-api-access-km2vk") pod "3c058268-e16d-417e-8375-014b2cd1d3a5" (UID: "3c058268-e16d-417e-8375-014b2cd1d3a5"). InnerVolumeSpecName "kube-api-access-km2vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:32:32 crc kubenswrapper[4805]: I1216 12:32:32.777416 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c058268-e16d-417e-8375-014b2cd1d3a5-inventory" (OuterVolumeSpecName: "inventory") pod "3c058268-e16d-417e-8375-014b2cd1d3a5" (UID: "3c058268-e16d-417e-8375-014b2cd1d3a5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:32:32 crc kubenswrapper[4805]: I1216 12:32:32.779341 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c058268-e16d-417e-8375-014b2cd1d3a5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3c058268-e16d-417e-8375-014b2cd1d3a5" (UID: "3c058268-e16d-417e-8375-014b2cd1d3a5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:32:32 crc kubenswrapper[4805]: I1216 12:32:32.849913 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c058268-e16d-417e-8375-014b2cd1d3a5-inventory\") on node \"crc\" DevicePath \"\"" Dec 16 12:32:32 crc kubenswrapper[4805]: I1216 12:32:32.849959 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km2vk\" (UniqueName: \"kubernetes.io/projected/3c058268-e16d-417e-8375-014b2cd1d3a5-kube-api-access-km2vk\") on node \"crc\" DevicePath \"\"" Dec 16 12:32:32 crc kubenswrapper[4805]: I1216 12:32:32.849973 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c058268-e16d-417e-8375-014b2cd1d3a5-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.154308 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" event={"ID":"3c058268-e16d-417e-8375-014b2cd1d3a5","Type":"ContainerDied","Data":"057ec094cd4eeb46e2ad7f729cabada9c175abe70bbf329ce1a47dfdf495079d"} Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.154346 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="057ec094cd4eeb46e2ad7f729cabada9c175abe70bbf329ce1a47dfdf495079d" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.154396 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s5wgr" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.271420 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4"] Dec 16 12:32:33 crc kubenswrapper[4805]: E1216 12:32:33.272366 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c058268-e16d-417e-8375-014b2cd1d3a5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.272389 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c058268-e16d-417e-8375-014b2cd1d3a5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.272684 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c058268-e16d-417e-8375-014b2cd1d3a5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.274049 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.279014 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.279950 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.280106 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.280183 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.286129 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4"] Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.462388 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de881c10-2738-4b4a-9d44-8397ba3fc6b7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4\" (UID: \"de881c10-2738-4b4a-9d44-8397ba3fc6b7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.462492 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de881c10-2738-4b4a-9d44-8397ba3fc6b7-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4\" (UID: \"de881c10-2738-4b4a-9d44-8397ba3fc6b7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.462517 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcjgf\" (UniqueName: \"kubernetes.io/projected/de881c10-2738-4b4a-9d44-8397ba3fc6b7-kube-api-access-lcjgf\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4\" (UID: \"de881c10-2738-4b4a-9d44-8397ba3fc6b7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.564358 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de881c10-2738-4b4a-9d44-8397ba3fc6b7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4\" (UID: \"de881c10-2738-4b4a-9d44-8397ba3fc6b7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.564468 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de881c10-2738-4b4a-9d44-8397ba3fc6b7-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4\" (UID: \"de881c10-2738-4b4a-9d44-8397ba3fc6b7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.564494 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcjgf\" (UniqueName: \"kubernetes.io/projected/de881c10-2738-4b4a-9d44-8397ba3fc6b7-kube-api-access-lcjgf\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4\" (UID: 
\"de881c10-2738-4b4a-9d44-8397ba3fc6b7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.571942 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de881c10-2738-4b4a-9d44-8397ba3fc6b7-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4\" (UID: \"de881c10-2738-4b4a-9d44-8397ba3fc6b7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.572419 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de881c10-2738-4b4a-9d44-8397ba3fc6b7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4\" (UID: \"de881c10-2738-4b4a-9d44-8397ba3fc6b7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.587578 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcjgf\" (UniqueName: \"kubernetes.io/projected/de881c10-2738-4b4a-9d44-8397ba3fc6b7-kube-api-access-lcjgf\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4\" (UID: \"de881c10-2738-4b4a-9d44-8397ba3fc6b7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" Dec 16 12:32:33 crc kubenswrapper[4805]: I1216 12:32:33.609255 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" Dec 16 12:32:34 crc kubenswrapper[4805]: I1216 12:32:34.201213 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4"] Dec 16 12:32:35 crc kubenswrapper[4805]: I1216 12:32:35.184526 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" event={"ID":"de881c10-2738-4b4a-9d44-8397ba3fc6b7","Type":"ContainerStarted","Data":"388312570a545041281b3e2e61f08495ba0fee5dc69b02bf6790d112663eaf3a"} Dec 16 12:32:36 crc kubenswrapper[4805]: I1216 12:32:36.197244 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" event={"ID":"de881c10-2738-4b4a-9d44-8397ba3fc6b7","Type":"ContainerStarted","Data":"cb79c907cfec0d0db45c516641c4681240363620efbc4faa80f27614ce465f52"} Dec 16 12:32:36 crc kubenswrapper[4805]: I1216 12:32:36.221159 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" podStartSLOduration=1.7387603280000001 podStartE2EDuration="3.221123511s" podCreationTimestamp="2025-12-16 12:32:33 +0000 UTC" firstStartedPulling="2025-12-16 12:32:34.196380039 +0000 UTC m=+2227.914637844" lastFinishedPulling="2025-12-16 12:32:35.678743222 +0000 UTC m=+2229.397001027" observedRunningTime="2025-12-16 12:32:36.213824221 +0000 UTC m=+2229.932082046" watchObservedRunningTime="2025-12-16 12:32:36.221123511 +0000 UTC m=+2229.939381326" Dec 16 12:32:46 crc kubenswrapper[4805]: I1216 12:32:46.295868 4805 generic.go:334] "Generic (PLEG): container finished" podID="de881c10-2738-4b4a-9d44-8397ba3fc6b7" containerID="cb79c907cfec0d0db45c516641c4681240363620efbc4faa80f27614ce465f52" exitCode=0 Dec 16 12:32:46 crc kubenswrapper[4805]: I1216 12:32:46.295983 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" 
event={"ID":"de881c10-2738-4b4a-9d44-8397ba3fc6b7","Type":"ContainerDied","Data":"cb79c907cfec0d0db45c516641c4681240363620efbc4faa80f27614ce465f52"} Dec 16 12:32:47 crc kubenswrapper[4805]: I1216 12:32:47.736499 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" Dec 16 12:32:47 crc kubenswrapper[4805]: I1216 12:32:47.864748 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de881c10-2738-4b4a-9d44-8397ba3fc6b7-inventory\") pod \"de881c10-2738-4b4a-9d44-8397ba3fc6b7\" (UID: \"de881c10-2738-4b4a-9d44-8397ba3fc6b7\") " Dec 16 12:32:47 crc kubenswrapper[4805]: I1216 12:32:47.865068 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcjgf\" (UniqueName: \"kubernetes.io/projected/de881c10-2738-4b4a-9d44-8397ba3fc6b7-kube-api-access-lcjgf\") pod \"de881c10-2738-4b4a-9d44-8397ba3fc6b7\" (UID: \"de881c10-2738-4b4a-9d44-8397ba3fc6b7\") " Dec 16 12:32:47 crc kubenswrapper[4805]: I1216 12:32:47.865339 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de881c10-2738-4b4a-9d44-8397ba3fc6b7-ssh-key\") pod \"de881c10-2738-4b4a-9d44-8397ba3fc6b7\" (UID: \"de881c10-2738-4b4a-9d44-8397ba3fc6b7\") " Dec 16 12:32:47 crc kubenswrapper[4805]: I1216 12:32:47.870788 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de881c10-2738-4b4a-9d44-8397ba3fc6b7-kube-api-access-lcjgf" (OuterVolumeSpecName: "kube-api-access-lcjgf") pod "de881c10-2738-4b4a-9d44-8397ba3fc6b7" (UID: "de881c10-2738-4b4a-9d44-8397ba3fc6b7"). InnerVolumeSpecName "kube-api-access-lcjgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:32:47 crc kubenswrapper[4805]: I1216 12:32:47.894484 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de881c10-2738-4b4a-9d44-8397ba3fc6b7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "de881c10-2738-4b4a-9d44-8397ba3fc6b7" (UID: "de881c10-2738-4b4a-9d44-8397ba3fc6b7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:32:47 crc kubenswrapper[4805]: I1216 12:32:47.896285 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de881c10-2738-4b4a-9d44-8397ba3fc6b7-inventory" (OuterVolumeSpecName: "inventory") pod "de881c10-2738-4b4a-9d44-8397ba3fc6b7" (UID: "de881c10-2738-4b4a-9d44-8397ba3fc6b7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:32:47 crc kubenswrapper[4805]: I1216 12:32:47.968954 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de881c10-2738-4b4a-9d44-8397ba3fc6b7-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:32:47 crc kubenswrapper[4805]: I1216 12:32:47.969248 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de881c10-2738-4b4a-9d44-8397ba3fc6b7-inventory\") on node \"crc\" DevicePath \"\"" Dec 16 12:32:47 crc kubenswrapper[4805]: I1216 12:32:47.970122 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcjgf\" (UniqueName: \"kubernetes.io/projected/de881c10-2738-4b4a-9d44-8397ba3fc6b7-kube-api-access-lcjgf\") on node \"crc\" DevicePath \"\"" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.318647 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" event={"ID":"de881c10-2738-4b4a-9d44-8397ba3fc6b7","Type":"ContainerDied","Data":"388312570a545041281b3e2e61f08495ba0fee5dc69b02bf6790d112663eaf3a"} Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.318712 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="388312570a545041281b3e2e61f08495ba0fee5dc69b02bf6790d112663eaf3a" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.318732 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.416046 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2"] Dec 16 12:32:48 crc kubenswrapper[4805]: E1216 12:32:48.416513 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de881c10-2738-4b4a-9d44-8397ba3fc6b7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.416533 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="de881c10-2738-4b4a-9d44-8397ba3fc6b7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.416748 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="de881c10-2738-4b4a-9d44-8397ba3fc6b7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.417444 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.430859 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.431093 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.432196 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2"] Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.432650 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.432671 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.432726 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.432868 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.433078 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.433773 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.582030 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.582072 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.582110 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqpvj\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-kube-api-access-gqpvj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.582158 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: 
\"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.582192 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.582208 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.582228 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.582261 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.582296 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.582312 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.582344 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.582370 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.582395 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.582446 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.684195 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.684307 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.684338 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.684397 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqpvj\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-kube-api-access-gqpvj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.684958 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.685045 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.685095 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.685130 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.685192 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.685246 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.685280 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.685328 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: 
\"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.685362 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.685423 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.696011 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.696391 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.696470 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.696597 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.696776 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.696933 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.697013 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.698531 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.700923 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.701384 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.703114 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.708118 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.711104 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.714918 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqpvj\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-kube-api-access-gqpvj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:48 crc kubenswrapper[4805]: I1216 12:32:48.739243 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:32:49 crc kubenswrapper[4805]: I1216 12:32:49.317356 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2"] Dec 16 12:32:49 crc kubenswrapper[4805]: I1216 12:32:49.346516 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" event={"ID":"cb170dbb-2c7a-417a-8254-849165c08ef4","Type":"ContainerStarted","Data":"7c3584c69000898b6e25bc320bc57724d2f65c20696453d398e9ace0b7445b85"} Dec 16 12:32:51 crc kubenswrapper[4805]: I1216 12:32:51.366206 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" event={"ID":"cb170dbb-2c7a-417a-8254-849165c08ef4","Type":"ContainerStarted","Data":"1856c4573dc97d6555ee49d4c17e0617f4a755cf5a8a27dc14b702e09c291cef"} Dec 16 12:32:51 crc kubenswrapper[4805]: I1216 12:32:51.409667 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" podStartSLOduration=2.441273161 podStartE2EDuration="3.40964554s" podCreationTimestamp="2025-12-16 12:32:48 +0000 UTC" firstStartedPulling="2025-12-16 12:32:49.339273929 +0000 UTC m=+2243.057531734" lastFinishedPulling="2025-12-16 12:32:50.307646308 +0000 UTC m=+2244.025904113" observedRunningTime="2025-12-16 12:32:51.403997688 +0000 UTC m=+2245.122255503" watchObservedRunningTime="2025-12-16 12:32:51.40964554 +0000 UTC m=+2245.127903355" Dec 16 12:32:57 crc kubenswrapper[4805]: I1216 12:32:57.071964 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:32:57 crc kubenswrapper[4805]: I1216 12:32:57.072533 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:33:27 crc kubenswrapper[4805]: I1216 12:33:27.071316 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:33:27 crc kubenswrapper[4805]: I1216 12:33:27.071819 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" 
podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:33:27 crc kubenswrapper[4805]: I1216 12:33:27.071871 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 12:33:27 crc kubenswrapper[4805]: I1216 12:33:27.072736 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 12:33:27 crc kubenswrapper[4805]: I1216 12:33:27.072827 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" gracePeriod=600 Dec 16 12:33:27 crc kubenswrapper[4805]: E1216 12:33:27.210092 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:33:27 crc kubenswrapper[4805]: I1216 12:33:27.743752 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" exitCode=0 Dec 16 12:33:27 crc kubenswrapper[4805]: I1216 12:33:27.743804 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b"} Dec 16 12:33:27 crc kubenswrapper[4805]: I1216 12:33:27.743843 4805 scope.go:117] "RemoveContainer" containerID="956ad6f3f2307c0a334de402c87f8d9140bbfe291ff841eaa18f652f0b7f65f8" Dec 16 12:33:27 crc kubenswrapper[4805]: I1216 12:33:27.744391 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:33:27 crc kubenswrapper[4805]: E1216 12:33:27.744703 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:33:29 crc kubenswrapper[4805]: I1216 12:33:29.772613 4805 generic.go:334] "Generic (PLEG): container finished" podID="cb170dbb-2c7a-417a-8254-849165c08ef4" containerID="1856c4573dc97d6555ee49d4c17e0617f4a755cf5a8a27dc14b702e09c291cef" exitCode=0 Dec 16 12:33:29 crc kubenswrapper[4805]: I1216 12:33:29.772653 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" event={"ID":"cb170dbb-2c7a-417a-8254-849165c08ef4","Type":"ContainerDied","Data":"1856c4573dc97d6555ee49d4c17e0617f4a755cf5a8a27dc14b702e09c291cef"} Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.255935 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.432779 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"cb170dbb-2c7a-417a-8254-849165c08ef4\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.432860 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-bootstrap-combined-ca-bundle\") pod \"cb170dbb-2c7a-417a-8254-849165c08ef4\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.432889 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"cb170dbb-2c7a-417a-8254-849165c08ef4\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.432908 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-libvirt-combined-ca-bundle\") pod \"cb170dbb-2c7a-417a-8254-849165c08ef4\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.432946 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-neutron-metadata-combined-ca-bundle\") pod \"cb170dbb-2c7a-417a-8254-849165c08ef4\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.432964 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-telemetry-combined-ca-bundle\") pod \"cb170dbb-2c7a-417a-8254-849165c08ef4\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.432994 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-ssh-key\") pod \"cb170dbb-2c7a-417a-8254-849165c08ef4\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.433069 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-repo-setup-combined-ca-bundle\") pod \"cb170dbb-2c7a-417a-8254-849165c08ef4\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " Dec 16 12:33:31 crc 
kubenswrapper[4805]: I1216 12:33:31.433154 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-nova-combined-ca-bundle\") pod \"cb170dbb-2c7a-417a-8254-849165c08ef4\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.433220 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqpvj\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-kube-api-access-gqpvj\") pod \"cb170dbb-2c7a-417a-8254-849165c08ef4\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.433247 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"cb170dbb-2c7a-417a-8254-849165c08ef4\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.433288 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-inventory\") pod \"cb170dbb-2c7a-417a-8254-849165c08ef4\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.433375 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-ovn-combined-ca-bundle\") pod \"cb170dbb-2c7a-417a-8254-849165c08ef4\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.433414 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"cb170dbb-2c7a-417a-8254-849165c08ef4\" (UID: \"cb170dbb-2c7a-417a-8254-849165c08ef4\") " Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.442715 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "cb170dbb-2c7a-417a-8254-849165c08ef4" (UID: "cb170dbb-2c7a-417a-8254-849165c08ef4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.442780 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "cb170dbb-2c7a-417a-8254-849165c08ef4" (UID: "cb170dbb-2c7a-417a-8254-849165c08ef4"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.442898 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "cb170dbb-2c7a-417a-8254-849165c08ef4" (UID: "cb170dbb-2c7a-417a-8254-849165c08ef4"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.444496 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "cb170dbb-2c7a-417a-8254-849165c08ef4" (UID: "cb170dbb-2c7a-417a-8254-849165c08ef4"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.445198 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "cb170dbb-2c7a-417a-8254-849165c08ef4" (UID: "cb170dbb-2c7a-417a-8254-849165c08ef4"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.445451 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "cb170dbb-2c7a-417a-8254-849165c08ef4" (UID: "cb170dbb-2c7a-417a-8254-849165c08ef4"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.446482 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-kube-api-access-gqpvj" (OuterVolumeSpecName: "kube-api-access-gqpvj") pod "cb170dbb-2c7a-417a-8254-849165c08ef4" (UID: "cb170dbb-2c7a-417a-8254-849165c08ef4"). InnerVolumeSpecName "kube-api-access-gqpvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.447373 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "cb170dbb-2c7a-417a-8254-849165c08ef4" (UID: "cb170dbb-2c7a-417a-8254-849165c08ef4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.448192 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "cb170dbb-2c7a-417a-8254-849165c08ef4" (UID: "cb170dbb-2c7a-417a-8254-849165c08ef4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.448838 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "cb170dbb-2c7a-417a-8254-849165c08ef4" (UID: "cb170dbb-2c7a-417a-8254-849165c08ef4"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.449126 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "cb170dbb-2c7a-417a-8254-849165c08ef4" (UID: "cb170dbb-2c7a-417a-8254-849165c08ef4"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.460867 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "cb170dbb-2c7a-417a-8254-849165c08ef4" (UID: "cb170dbb-2c7a-417a-8254-849165c08ef4"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.468586 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cb170dbb-2c7a-417a-8254-849165c08ef4" (UID: "cb170dbb-2c7a-417a-8254-849165c08ef4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.468795 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-inventory" (OuterVolumeSpecName: "inventory") pod "cb170dbb-2c7a-417a-8254-849165c08ef4" (UID: "cb170dbb-2c7a-417a-8254-849165c08ef4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.535775 4805 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.535861 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqpvj\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-kube-api-access-gqpvj\") on node \"crc\" DevicePath \"\"" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.535878 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.535941 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-inventory\") on node \"crc\" DevicePath \"\"" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.535958 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.535972 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.535985 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.535998 4805 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.536011 4805 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.536023 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cb170dbb-2c7a-417a-8254-849165c08ef4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.536036 4805 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.536048 4805 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.536058 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.536068 4805 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb170dbb-2c7a-417a-8254-849165c08ef4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.793489 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" event={"ID":"cb170dbb-2c7a-417a-8254-849165c08ef4","Type":"ContainerDied","Data":"7c3584c69000898b6e25bc320bc57724d2f65c20696453d398e9ace0b7445b85"} Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.793606 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c3584c69000898b6e25bc320bc57724d2f65c20696453d398e9ace0b7445b85" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.793577 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.942670 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl"] Dec 16 12:33:31 crc kubenswrapper[4805]: E1216 12:33:31.943397 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb170dbb-2c7a-417a-8254-849165c08ef4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.943445 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb170dbb-2c7a-417a-8254-849165c08ef4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.943700 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb170dbb-2c7a-417a-8254-849165c08ef4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.944671 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.951076 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.951358 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.951509 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.951780 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.951793 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2" Dec 16 12:33:31 crc kubenswrapper[4805]: I1216 12:33:31.954841 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl"] Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.146969 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n2stl\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.147075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6cf38957-b778-49fe-9dd0-c629e23fb773-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n2stl\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.147116 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n2stl\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.147299 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ndm6\" (UniqueName: \"kubernetes.io/projected/6cf38957-b778-49fe-9dd0-c629e23fb773-kube-api-access-2ndm6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n2stl\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.147371 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n2stl\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.248480 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n2stl\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.248594 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6cf38957-b778-49fe-9dd0-c629e23fb773-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n2stl\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.248631 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n2stl\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.248701 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ndm6\" (UniqueName: \"kubernetes.io/projected/6cf38957-b778-49fe-9dd0-c629e23fb773-kube-api-access-2ndm6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n2stl\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.248765 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n2stl\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.249772 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6cf38957-b778-49fe-9dd0-c629e23fb773-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n2stl\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.252654 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n2stl\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.252890 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n2stl\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.253663 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n2stl\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") 
" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.267397 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ndm6\" (UniqueName: \"kubernetes.io/projected/6cf38957-b778-49fe-9dd0-c629e23fb773-kube-api-access-2ndm6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-n2stl\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.271368 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:33:32 crc kubenswrapper[4805]: I1216 12:33:32.847487 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl"] Dec 16 12:33:33 crc kubenswrapper[4805]: I1216 12:33:33.843988 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" event={"ID":"6cf38957-b778-49fe-9dd0-c629e23fb773","Type":"ContainerStarted","Data":"e0cafc22c4fc28dc98cef0953c99b35afbbda89c1fc8ffa7acb4a187dd58208a"} Dec 16 12:33:35 crc kubenswrapper[4805]: I1216 12:33:35.864245 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" event={"ID":"6cf38957-b778-49fe-9dd0-c629e23fb773","Type":"ContainerStarted","Data":"b9d333b51c3222886c290db3f465e51bfb92f60473ed0c8cb07b26c4d49bbc47"} Dec 16 12:33:41 crc kubenswrapper[4805]: I1216 12:33:41.523007 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:33:41 crc kubenswrapper[4805]: E1216 12:33:41.523707 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:33:55 crc kubenswrapper[4805]: I1216 12:33:55.523305 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:33:55 crc kubenswrapper[4805]: E1216 12:33:55.525402 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:34:01 crc kubenswrapper[4805]: E1216 12:34:01.905430 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Dec 16 12:34:06 crc kubenswrapper[4805]: I1216 12:34:06.528822 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:34:06 crc kubenswrapper[4805]: E1216 12:34:06.529457 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:34:12 crc kubenswrapper[4805]: I1216 12:34:12.108418 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" podStartSLOduration=39.107065565 podStartE2EDuration="41.108395476s" podCreationTimestamp="2025-12-16 12:33:31 +0000 UTC" firstStartedPulling="2025-12-16 12:33:32.864644058 +0000 UTC m=+2286.582901873" lastFinishedPulling="2025-12-16 12:33:34.865973979 +0000 UTC m=+2288.584231784" observedRunningTime="2025-12-16 12:33:35.882365385 +0000 UTC m=+2289.600623210" watchObservedRunningTime="2025-12-16 12:34:12.108395476 +0000 UTC m=+2325.826653291" Dec 16 12:34:12 crc kubenswrapper[4805]: I1216 12:34:12.123276 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cqgcs"] Dec 16 12:34:12 crc kubenswrapper[4805]: I1216 12:34:12.125398 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqgcs" Dec 16 12:34:12 crc kubenswrapper[4805]: I1216 12:34:12.147642 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cqgcs"] Dec 16 12:34:12 crc kubenswrapper[4805]: I1216 12:34:12.181595 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrnf6\" (UniqueName: \"kubernetes.io/projected/b2e2559b-85ef-43d4-8c14-4aa510a5132c-kube-api-access-mrnf6\") pod \"certified-operators-cqgcs\" (UID: \"b2e2559b-85ef-43d4-8c14-4aa510a5132c\") " pod="openshift-marketplace/certified-operators-cqgcs" Dec 16 12:34:12 crc kubenswrapper[4805]: I1216 12:34:12.181668 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2e2559b-85ef-43d4-8c14-4aa510a5132c-utilities\") pod \"certified-operators-cqgcs\" (UID: \"b2e2559b-85ef-43d4-8c14-4aa510a5132c\") " pod="openshift-marketplace/certified-operators-cqgcs" Dec 16 12:34:12 crc kubenswrapper[4805]: I1216 12:34:12.181702 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2e2559b-85ef-43d4-8c14-4aa510a5132c-catalog-content\") pod \"certified-operators-cqgcs\" (UID: \"b2e2559b-85ef-43d4-8c14-4aa510a5132c\") " pod="openshift-marketplace/certified-operators-cqgcs" Dec 16 12:34:12 crc kubenswrapper[4805]: I1216 12:34:12.283378 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrnf6\" (UniqueName: \"kubernetes.io/projected/b2e2559b-85ef-43d4-8c14-4aa510a5132c-kube-api-access-mrnf6\") pod \"certified-operators-cqgcs\" (UID: \"b2e2559b-85ef-43d4-8c14-4aa510a5132c\") " pod="openshift-marketplace/certified-operators-cqgcs" Dec 16 12:34:12 crc kubenswrapper[4805]: I1216 12:34:12.283671 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2e2559b-85ef-43d4-8c14-4aa510a5132c-utilities\") pod \"certified-operators-cqgcs\" (UID: \"b2e2559b-85ef-43d4-8c14-4aa510a5132c\") " pod="openshift-marketplace/certified-operators-cqgcs" Dec 16 12:34:12 crc kubenswrapper[4805]: I1216 
12:34:12.283702 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2e2559b-85ef-43d4-8c14-4aa510a5132c-catalog-content\") pod \"certified-operators-cqgcs\" (UID: \"b2e2559b-85ef-43d4-8c14-4aa510a5132c\") " pod="openshift-marketplace/certified-operators-cqgcs" Dec 16 12:34:12 crc kubenswrapper[4805]: I1216 12:34:12.284225 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2e2559b-85ef-43d4-8c14-4aa510a5132c-utilities\") pod \"certified-operators-cqgcs\" (UID: \"b2e2559b-85ef-43d4-8c14-4aa510a5132c\") " pod="openshift-marketplace/certified-operators-cqgcs" Dec 16 12:34:12 crc kubenswrapper[4805]: I1216 12:34:12.284257 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2e2559b-85ef-43d4-8c14-4aa510a5132c-catalog-content\") pod \"certified-operators-cqgcs\" (UID: \"b2e2559b-85ef-43d4-8c14-4aa510a5132c\") " pod="openshift-marketplace/certified-operators-cqgcs" Dec 16 12:34:12 crc kubenswrapper[4805]: I1216 12:34:12.311651 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrnf6\" (UniqueName: \"kubernetes.io/projected/b2e2559b-85ef-43d4-8c14-4aa510a5132c-kube-api-access-mrnf6\") pod \"certified-operators-cqgcs\" (UID: \"b2e2559b-85ef-43d4-8c14-4aa510a5132c\") " pod="openshift-marketplace/certified-operators-cqgcs" Dec 16 12:34:12 crc kubenswrapper[4805]: I1216 12:34:12.539100 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqgcs" Dec 16 12:34:13 crc kubenswrapper[4805]: I1216 12:34:13.229374 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cqgcs"] Dec 16 12:34:13 crc kubenswrapper[4805]: I1216 12:34:13.289269 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqgcs" event={"ID":"b2e2559b-85ef-43d4-8c14-4aa510a5132c","Type":"ContainerStarted","Data":"4e9d295aabe3214831c874b75d04623225284ff145aa796f9913d03d8ed1475a"} Dec 16 12:34:14 crc kubenswrapper[4805]: I1216 12:34:14.300454 4805 generic.go:334] "Generic (PLEG): container finished" podID="b2e2559b-85ef-43d4-8c14-4aa510a5132c" containerID="dbe962780370c65b7bbf61debeeb35dd48dd4d89283c7c3131e5fbbfdd7c4a5d" exitCode=0 Dec 16 12:34:14 crc kubenswrapper[4805]: I1216 12:34:14.300572 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqgcs" event={"ID":"b2e2559b-85ef-43d4-8c14-4aa510a5132c","Type":"ContainerDied","Data":"dbe962780370c65b7bbf61debeeb35dd48dd4d89283c7c3131e5fbbfdd7c4a5d"} Dec 16 12:34:14 crc kubenswrapper[4805]: I1216 12:34:14.304202 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 16 12:34:20 crc kubenswrapper[4805]: I1216 12:34:20.524373 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:34:20 crc kubenswrapper[4805]: E1216 12:34:20.525171 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:34:22 crc kubenswrapper[4805]: I1216 12:34:22.396533 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqgcs" event={"ID":"b2e2559b-85ef-43d4-8c14-4aa510a5132c","Type":"ContainerStarted","Data":"da7b9c2344ffcbafa657ac3060c1a1b4e92c1d692520bc7ff3f4b468ea3e4bac"} Dec 16 12:34:23 crc kubenswrapper[4805]: I1216 12:34:23.407512 4805 generic.go:334] "Generic (PLEG): container finished" podID="b2e2559b-85ef-43d4-8c14-4aa510a5132c" containerID="da7b9c2344ffcbafa657ac3060c1a1b4e92c1d692520bc7ff3f4b468ea3e4bac" exitCode=0 Dec 16 12:34:23 crc kubenswrapper[4805]: I1216 12:34:23.407626 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqgcs" event={"ID":"b2e2559b-85ef-43d4-8c14-4aa510a5132c","Type":"ContainerDied","Data":"da7b9c2344ffcbafa657ac3060c1a1b4e92c1d692520bc7ff3f4b468ea3e4bac"} Dec 16 12:34:25 crc kubenswrapper[4805]: I1216 12:34:25.433785 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqgcs" event={"ID":"b2e2559b-85ef-43d4-8c14-4aa510a5132c","Type":"ContainerStarted","Data":"b4f81774fd39ac8f8bc36b74e21ec9427de9726812974184ea184c15cae28705"} Dec 16 12:34:25 crc kubenswrapper[4805]: I1216 12:34:25.460087 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cqgcs" podStartSLOduration=3.465037325 podStartE2EDuration="13.460062012s" podCreationTimestamp="2025-12-16 12:34:12 +0000 UTC" firstStartedPulling="2025-12-16 12:34:14.303817065 +0000 UTC m=+2328.022074880" lastFinishedPulling="2025-12-16 12:34:24.298841762 +0000 UTC m=+2338.017099567" observedRunningTime="2025-12-16 12:34:25.453373001 +0000 UTC m=+2339.171630806" watchObservedRunningTime="2025-12-16 12:34:25.460062012 +0000 UTC m=+2339.178319827" Dec 16 12:34:32 crc kubenswrapper[4805]: I1216 12:34:32.540712 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cqgcs" Dec 16 12:34:32 crc kubenswrapper[4805]: I1216 12:34:32.540994 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cqgcs" Dec 16 12:34:32 crc kubenswrapper[4805]: I1216 12:34:32.597616 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cqgcs" Dec 16 12:34:33 crc kubenswrapper[4805]: I1216 12:34:33.557401 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cqgcs" Dec 16 12:34:33 crc kubenswrapper[4805]: I1216 12:34:33.638098 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cqgcs"] Dec 16 12:34:33 crc kubenswrapper[4805]: I1216 12:34:33.686433 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f5wt9"] Dec 16 12:34:33 crc kubenswrapper[4805]: I1216 12:34:33.686671 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f5wt9" podUID="41271b36-917c-4c75-b884-eacf365001cf" containerName="registry-server" containerID="cri-o://0133204a023f435a73a2a108a51066a83490191bd79fb72ffdeb9179ffba9abb" gracePeriod=2 Dec 16 12:34:34 crc kubenswrapper[4805]: I1216 12:34:34.523038 4805 generic.go:334] "Generic (PLEG): 
container finished" podID="41271b36-917c-4c75-b884-eacf365001cf" containerID="0133204a023f435a73a2a108a51066a83490191bd79fb72ffdeb9179ffba9abb" exitCode=0 Dec 16 12:34:34 crc kubenswrapper[4805]: I1216 12:34:34.535464 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5wt9" event={"ID":"41271b36-917c-4c75-b884-eacf365001cf","Type":"ContainerDied","Data":"0133204a023f435a73a2a108a51066a83490191bd79fb72ffdeb9179ffba9abb"} Dec 16 12:34:34 crc kubenswrapper[4805]: E1216 12:34:34.614555 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0133204a023f435a73a2a108a51066a83490191bd79fb72ffdeb9179ffba9abb is running failed: container process not found" containerID="0133204a023f435a73a2a108a51066a83490191bd79fb72ffdeb9179ffba9abb" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:34:34 crc kubenswrapper[4805]: E1216 12:34:34.615071 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0133204a023f435a73a2a108a51066a83490191bd79fb72ffdeb9179ffba9abb is running failed: container process not found" containerID="0133204a023f435a73a2a108a51066a83490191bd79fb72ffdeb9179ffba9abb" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:34:34 crc kubenswrapper[4805]: E1216 12:34:34.616110 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0133204a023f435a73a2a108a51066a83490191bd79fb72ffdeb9179ffba9abb is running failed: container process not found" containerID="0133204a023f435a73a2a108a51066a83490191bd79fb72ffdeb9179ffba9abb" cmd=["grpc_health_probe","-addr=:50051"] Dec 16 12:34:34 crc kubenswrapper[4805]: E1216 12:34:34.616184 4805 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0133204a023f435a73a2a108a51066a83490191bd79fb72ffdeb9179ffba9abb is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-f5wt9" podUID="41271b36-917c-4c75-b884-eacf365001cf" containerName="registry-server" Dec 16 12:34:34 crc kubenswrapper[4805]: I1216 12:34:34.859662 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:34:34 crc kubenswrapper[4805]: I1216 12:34:34.988788 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngwkm\" (UniqueName: \"kubernetes.io/projected/41271b36-917c-4c75-b884-eacf365001cf-kube-api-access-ngwkm\") pod \"41271b36-917c-4c75-b884-eacf365001cf\" (UID: \"41271b36-917c-4c75-b884-eacf365001cf\") " Dec 16 12:34:34 crc kubenswrapper[4805]: I1216 12:34:34.988924 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41271b36-917c-4c75-b884-eacf365001cf-utilities\") pod \"41271b36-917c-4c75-b884-eacf365001cf\" (UID: \"41271b36-917c-4c75-b884-eacf365001cf\") " Dec 16 12:34:34 crc kubenswrapper[4805]: I1216 12:34:34.988950 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41271b36-917c-4c75-b884-eacf365001cf-catalog-content\") pod \"41271b36-917c-4c75-b884-eacf365001cf\" (UID: \"41271b36-917c-4c75-b884-eacf365001cf\") " Dec 16 12:34:34 crc kubenswrapper[4805]: I1216 12:34:34.989931 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41271b36-917c-4c75-b884-eacf365001cf-utilities" (OuterVolumeSpecName: "utilities") pod "41271b36-917c-4c75-b884-eacf365001cf" (UID: "41271b36-917c-4c75-b884-eacf365001cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:34:35 crc kubenswrapper[4805]: I1216 12:34:35.009826 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41271b36-917c-4c75-b884-eacf365001cf-kube-api-access-ngwkm" (OuterVolumeSpecName: "kube-api-access-ngwkm") pod "41271b36-917c-4c75-b884-eacf365001cf" (UID: "41271b36-917c-4c75-b884-eacf365001cf"). InnerVolumeSpecName "kube-api-access-ngwkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:34:35 crc kubenswrapper[4805]: I1216 12:34:35.076291 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41271b36-917c-4c75-b884-eacf365001cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41271b36-917c-4c75-b884-eacf365001cf" (UID: "41271b36-917c-4c75-b884-eacf365001cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:34:35 crc kubenswrapper[4805]: I1216 12:34:35.091449 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngwkm\" (UniqueName: \"kubernetes.io/projected/41271b36-917c-4c75-b884-eacf365001cf-kube-api-access-ngwkm\") on node \"crc\" DevicePath \"\"" Dec 16 12:34:35 crc kubenswrapper[4805]: I1216 12:34:35.091487 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41271b36-917c-4c75-b884-eacf365001cf-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:34:35 crc kubenswrapper[4805]: I1216 12:34:35.091503 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41271b36-917c-4c75-b884-eacf365001cf-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:34:35 crc kubenswrapper[4805]: I1216 12:34:35.522242 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:34:35 crc kubenswrapper[4805]: E1216 12:34:35.522592 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:34:35 crc kubenswrapper[4805]: I1216 12:34:35.533268 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5wt9" Dec 16 12:34:35 crc kubenswrapper[4805]: I1216 12:34:35.534404 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5wt9" event={"ID":"41271b36-917c-4c75-b884-eacf365001cf","Type":"ContainerDied","Data":"c9b8fd790d54b8b39f0f6ec63024f82b15d2813a66f612036ade29f0950046e3"} Dec 16 12:34:35 crc kubenswrapper[4805]: I1216 12:34:35.534485 4805 scope.go:117] "RemoveContainer" containerID="0133204a023f435a73a2a108a51066a83490191bd79fb72ffdeb9179ffba9abb" Dec 16 12:34:35 crc kubenswrapper[4805]: I1216 12:34:35.597182 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f5wt9"] Dec 16 12:34:35 crc kubenswrapper[4805]: I1216 12:34:35.601345 4805 scope.go:117] "RemoveContainer" containerID="c3287245f0a97af9ad430c489aab36f8b79008d5f328e8361e83ae6d40a4e208" Dec 16 12:34:35 crc kubenswrapper[4805]: I1216 12:34:35.618627 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f5wt9"] Dec 16 12:34:35 crc kubenswrapper[4805]: I1216 12:34:35.639299 4805 scope.go:117] "RemoveContainer" containerID="dad0fb75d64b3a6aa01a07c454aef9f2e3dca4cb731462e0ea592942ebda982c" Dec 16 12:34:36 crc kubenswrapper[4805]: I1216 12:34:36.535316 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41271b36-917c-4c75-b884-eacf365001cf" path="/var/lib/kubelet/pods/41271b36-917c-4c75-b884-eacf365001cf/volumes" Dec 16 12:34:46 crc kubenswrapper[4805]: I1216 12:34:46.640578 4805 generic.go:334] "Generic (PLEG): container finished" podID="6cf38957-b778-49fe-9dd0-c629e23fb773" containerID="b9d333b51c3222886c290db3f465e51bfb92f60473ed0c8cb07b26c4d49bbc47" exitCode=0 Dec 16 12:34:46 crc kubenswrapper[4805]: I1216 12:34:46.641013 4805 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" event={"ID":"6cf38957-b778-49fe-9dd0-c629e23fb773","Type":"ContainerDied","Data":"b9d333b51c3222886c290db3f465e51bfb92f60473ed0c8cb07b26c4d49bbc47"} Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.144532 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.304698 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6cf38957-b778-49fe-9dd0-c629e23fb773-ovncontroller-config-0\") pod \"6cf38957-b778-49fe-9dd0-c629e23fb773\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.304797 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ndm6\" (UniqueName: \"kubernetes.io/projected/6cf38957-b778-49fe-9dd0-c629e23fb773-kube-api-access-2ndm6\") pod \"6cf38957-b778-49fe-9dd0-c629e23fb773\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.304898 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-ssh-key\") pod \"6cf38957-b778-49fe-9dd0-c629e23fb773\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.305022 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-ovn-combined-ca-bundle\") pod \"6cf38957-b778-49fe-9dd0-c629e23fb773\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.305156 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-inventory\") pod \"6cf38957-b778-49fe-9dd0-c629e23fb773\" (UID: \"6cf38957-b778-49fe-9dd0-c629e23fb773\") " Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.309996 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "6cf38957-b778-49fe-9dd0-c629e23fb773" (UID: "6cf38957-b778-49fe-9dd0-c629e23fb773"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.310725 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cf38957-b778-49fe-9dd0-c629e23fb773-kube-api-access-2ndm6" (OuterVolumeSpecName: "kube-api-access-2ndm6") pod "6cf38957-b778-49fe-9dd0-c629e23fb773" (UID: "6cf38957-b778-49fe-9dd0-c629e23fb773"). InnerVolumeSpecName "kube-api-access-2ndm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.340428 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6cf38957-b778-49fe-9dd0-c629e23fb773" (UID: "6cf38957-b778-49fe-9dd0-c629e23fb773"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.342126 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cf38957-b778-49fe-9dd0-c629e23fb773-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "6cf38957-b778-49fe-9dd0-c629e23fb773" (UID: "6cf38957-b778-49fe-9dd0-c629e23fb773"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.345376 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-inventory" (OuterVolumeSpecName: "inventory") pod "6cf38957-b778-49fe-9dd0-c629e23fb773" (UID: "6cf38957-b778-49fe-9dd0-c629e23fb773"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.408285 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.408619 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.408753 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cf38957-b778-49fe-9dd0-c629e23fb773-inventory\") on node \"crc\" DevicePath \"\"" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.408833 4805 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6cf38957-b778-49fe-9dd0-c629e23fb773-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.408930 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ndm6\" (UniqueName: \"kubernetes.io/projected/6cf38957-b778-49fe-9dd0-c629e23fb773-kube-api-access-2ndm6\") on node \"crc\" DevicePath \"\"" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.522656 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:34:48 crc kubenswrapper[4805]: E1216 12:34:48.523023 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.660127 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" event={"ID":"6cf38957-b778-49fe-9dd0-c629e23fb773","Type":"ContainerDied","Data":"e0cafc22c4fc28dc98cef0953c99b35afbbda89c1fc8ffa7acb4a187dd58208a"} Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.660501 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0cafc22c4fc28dc98cef0953c99b35afbbda89c1fc8ffa7acb4a187dd58208a" Dec 16 12:34:48 crc kubenswrapper[4805]: 
I1216 12:34:48.660240 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-n2stl" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.778839 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g"] Dec 16 12:34:48 crc kubenswrapper[4805]: E1216 12:34:48.779699 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41271b36-917c-4c75-b884-eacf365001cf" containerName="registry-server" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.779721 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="41271b36-917c-4c75-b884-eacf365001cf" containerName="registry-server" Dec 16 12:34:48 crc kubenswrapper[4805]: E1216 12:34:48.779758 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41271b36-917c-4c75-b884-eacf365001cf" containerName="extract-content" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.779766 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="41271b36-917c-4c75-b884-eacf365001cf" containerName="extract-content" Dec 16 12:34:48 crc kubenswrapper[4805]: E1216 12:34:48.779785 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41271b36-917c-4c75-b884-eacf365001cf" containerName="extract-utilities" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.779793 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="41271b36-917c-4c75-b884-eacf365001cf" containerName="extract-utilities" Dec 16 12:34:48 crc kubenswrapper[4805]: E1216 12:34:48.779814 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf38957-b778-49fe-9dd0-c629e23fb773" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.779824 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf38957-b778-49fe-9dd0-c629e23fb773" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.780105 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cf38957-b778-49fe-9dd0-c629e23fb773" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.780168 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="41271b36-917c-4c75-b884-eacf365001cf" containerName="registry-server" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.781006 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.787551 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.787793 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.787985 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.788187 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.788356 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.788466 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.799335 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g"] Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.917835 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.917951 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.917990 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.918010 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blz4z\" (UniqueName: \"kubernetes.io/projected/f673c96e-f755-49d0-90ed-46ac92e151c2-kube-api-access-blz4z\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.918051 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:48 crc kubenswrapper[4805]: I1216 12:34:48.918130 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:49 crc kubenswrapper[4805]: I1216 12:34:49.019401 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:49 crc kubenswrapper[4805]: I1216 12:34:49.019454 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blz4z\" (UniqueName: \"kubernetes.io/projected/f673c96e-f755-49d0-90ed-46ac92e151c2-kube-api-access-blz4z\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:49 crc kubenswrapper[4805]: I1216 12:34:49.019501 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:49 crc kubenswrapper[4805]: I1216 12:34:49.019890 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:49 crc kubenswrapper[4805]: I1216 12:34:49.020327 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:49 crc kubenswrapper[4805]: I1216 12:34:49.020425 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:49 crc kubenswrapper[4805]: I1216 12:34:49.024583 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:49 crc kubenswrapper[4805]: I1216 12:34:49.024767 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:49 crc kubenswrapper[4805]: I1216 12:34:49.027010 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:49 crc kubenswrapper[4805]: I1216 12:34:49.026521 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:49 crc kubenswrapper[4805]: I1216 12:34:49.036603 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:49 crc kubenswrapper[4805]: I1216 12:34:49.042891 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blz4z\" (UniqueName: \"kubernetes.io/projected/f673c96e-f755-49d0-90ed-46ac92e151c2-kube-api-access-blz4z\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:49 crc kubenswrapper[4805]: I1216 12:34:49.109490 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:34:49 crc kubenswrapper[4805]: I1216 12:34:49.682243 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g"] Dec 16 12:34:50 crc kubenswrapper[4805]: I1216 12:34:50.685632 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" event={"ID":"f673c96e-f755-49d0-90ed-46ac92e151c2","Type":"ContainerStarted","Data":"2fa70109d495b3466b0c6aa43575d6557ce21d3c841ba58f505c31c57b27b816"} Dec 16 12:34:50 crc kubenswrapper[4805]: I1216 12:34:50.686217 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" event={"ID":"f673c96e-f755-49d0-90ed-46ac92e151c2","Type":"ContainerStarted","Data":"245f5537dbbb12bfe7fb074b801932f7c488989bbd851785b5b0c608c2b54e83"} Dec 16 12:34:50 crc kubenswrapper[4805]: I1216 12:34:50.717798 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" podStartSLOduration=2.019457635 podStartE2EDuration="2.717774327s" podCreationTimestamp="2025-12-16 12:34:48 +0000 UTC" firstStartedPulling="2025-12-16 12:34:49.688853091 +0000 UTC m=+2363.407110896" lastFinishedPulling="2025-12-16 12:34:50.387169783 +0000 UTC m=+2364.105427588" observedRunningTime="2025-12-16 12:34:50.713018081 +0000 UTC m=+2364.431275906" watchObservedRunningTime="2025-12-16 12:34:50.717774327 +0000 UTC m=+2364.436032142" Dec 16 12:35:03 crc kubenswrapper[4805]: I1216 12:35:03.523094 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:35:03 crc kubenswrapper[4805]: E1216 12:35:03.523860 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:35:17 crc kubenswrapper[4805]: I1216 12:35:17.523051 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:35:17 crc kubenswrapper[4805]: E1216 12:35:17.523930 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:35:28 crc kubenswrapper[4805]: I1216 12:35:28.589378 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:35:28 crc kubenswrapper[4805]: E1216 12:35:28.590050 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" 
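The "Observed pod startup duration" entry above encodes a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end figure minus the image-pull window (lastFinishedPulling - firstStartedPulling). A standalone Go sketch (not kubelet code; the timestamps are copied from the entry above) that checks the arithmetic:

```go
// Standalone check of the startup-latency arithmetic in the
// "Observed pod startup duration" entry above. Assumed relationship:
//   podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
package main

import (
	"fmt"
	"time"
)

// Layout matching Go's time.Time.String() output as printed in the log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-12-16 12:34:48 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2025-12-16 12:34:49.688853091 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-12-16 12:34:50.387169783 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2025-12-16 12:34:50.717774327 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)      // 2.717774327s == podStartE2EDuration
	pull := lastPull.Sub(firstPull)  // 698.316692ms spent pulling the image
	fmt.Println(e2e, pull, e2e-pull) // e2e-pull prints 2.019457635s == podStartSLOduration
}
```

The m=+... suffixes are monotonic-clock offsets; their differences agree with the wall-clock differences (2364.105427588 - 2363.407110896 = 0.698316692s, the same pull window).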
pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:35:41 crc kubenswrapper[4805]: I1216 12:35:41.523259 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:35:41 crc kubenswrapper[4805]: E1216 12:35:41.524128 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:35:44 crc kubenswrapper[4805]: I1216 12:35:44.333815 4805 generic.go:334] "Generic (PLEG): container finished" podID="f673c96e-f755-49d0-90ed-46ac92e151c2" containerID="2fa70109d495b3466b0c6aa43575d6557ce21d3c841ba58f505c31c57b27b816" exitCode=0 Dec 16 12:35:44 crc kubenswrapper[4805]: I1216 12:35:44.333889 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" event={"ID":"f673c96e-f755-49d0-90ed-46ac92e151c2","Type":"ContainerDied","Data":"2fa70109d495b3466b0c6aa43575d6557ce21d3c841ba58f505c31c57b27b816"} Dec 16 12:35:45 crc kubenswrapper[4805]: I1216 12:35:45.754945 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:35:45 crc kubenswrapper[4805]: I1216 12:35:45.951217 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-ssh-key\") pod \"f673c96e-f755-49d0-90ed-46ac92e151c2\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " Dec 16 12:35:45 crc kubenswrapper[4805]: I1216 12:35:45.951280 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-nova-metadata-neutron-config-0\") pod \"f673c96e-f755-49d0-90ed-46ac92e151c2\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " Dec 16 12:35:45 crc kubenswrapper[4805]: I1216 12:35:45.951343 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"f673c96e-f755-49d0-90ed-46ac92e151c2\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " Dec 16 12:35:45 crc kubenswrapper[4805]: I1216 12:35:45.951444 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-inventory\") pod \"f673c96e-f755-49d0-90ed-46ac92e151c2\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " Dec 16 12:35:45 crc kubenswrapper[4805]: I1216 12:35:45.951495 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blz4z\" (UniqueName: \"kubernetes.io/projected/f673c96e-f755-49d0-90ed-46ac92e151c2-kube-api-access-blz4z\") pod \"f673c96e-f755-49d0-90ed-46ac92e151c2\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " Dec 16 12:35:45 crc kubenswrapper[4805]: I1216 12:35:45.951653 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-neutron-metadata-combined-ca-bundle\") pod \"f673c96e-f755-49d0-90ed-46ac92e151c2\" (UID: \"f673c96e-f755-49d0-90ed-46ac92e151c2\") " Dec 16 12:35:45 crc kubenswrapper[4805]: I1216 12:35:45.959129 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f673c96e-f755-49d0-90ed-46ac92e151c2-kube-api-access-blz4z" (OuterVolumeSpecName: "kube-api-access-blz4z") pod "f673c96e-f755-49d0-90ed-46ac92e151c2" (UID: "f673c96e-f755-49d0-90ed-46ac92e151c2"). InnerVolumeSpecName "kube-api-access-blz4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:35:45 crc kubenswrapper[4805]: I1216 12:35:45.959937 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f673c96e-f755-49d0-90ed-46ac92e151c2" (UID: "f673c96e-f755-49d0-90ed-46ac92e151c2"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:35:45 crc kubenswrapper[4805]: I1216 12:35:45.993274 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f673c96e-f755-49d0-90ed-46ac92e151c2" (UID: "f673c96e-f755-49d0-90ed-46ac92e151c2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:35:45 crc kubenswrapper[4805]: I1216 12:35:45.995288 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-inventory" (OuterVolumeSpecName: "inventory") pod "f673c96e-f755-49d0-90ed-46ac92e151c2" (UID: "f673c96e-f755-49d0-90ed-46ac92e151c2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.006836 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "f673c96e-f755-49d0-90ed-46ac92e151c2" (UID: "f673c96e-f755-49d0-90ed-46ac92e151c2"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.021400 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "f673c96e-f755-49d0-90ed-46ac92e151c2" (UID: "f673c96e-f755-49d0-90ed-46ac92e151c2"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.055080 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-inventory\") on node \"crc\" DevicePath \"\"" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.055120 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blz4z\" (UniqueName: \"kubernetes.io/projected/f673c96e-f755-49d0-90ed-46ac92e151c2-kube-api-access-blz4z\") on node \"crc\" DevicePath \"\"" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.055151 4805 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.055164 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.055175 4805 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.055187 4805 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f673c96e-f755-49d0-90ed-46ac92e151c2-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.354544 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" event={"ID":"f673c96e-f755-49d0-90ed-46ac92e151c2","Type":"ContainerDied","Data":"245f5537dbbb12bfe7fb074b801932f7c488989bbd851785b5b0c608c2b54e83"} Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.354590 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="245f5537dbbb12bfe7fb074b801932f7c488989bbd851785b5b0c608c2b54e83" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.354625 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.605974 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55"] Dec 16 12:35:46 crc kubenswrapper[4805]: E1216 12:35:46.606310 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f673c96e-f755-49d0-90ed-46ac92e151c2" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.606322 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f673c96e-f755-49d0-90ed-46ac92e151c2" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.606525 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f673c96e-f755-49d0-90ed-46ac92e151c2" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.607153 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.611188 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.611629 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.612114 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.612508 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.617490 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.620256 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55"] Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.678949 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4ft55\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.679053 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4ft55\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.679098 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5ht9\" (UniqueName: \"kubernetes.io/projected/8b5953ad-0a78-4483-9097-2d4de5ad084e-kube-api-access-t5ht9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4ft55\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.679963 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4ft55\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.680108 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4ft55\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.783118 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4ft55\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.783297 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4ft55\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.783350 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5ht9\" (UniqueName: \"kubernetes.io/projected/8b5953ad-0a78-4483-9097-2d4de5ad084e-kube-api-access-t5ht9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4ft55\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.783683 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4ft55\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.783775 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4ft55\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.787854 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4ft55\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.789199 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4ft55\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.791266 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4ft55\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.794388 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-libvirt-secret-0\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-4ft55\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.811128 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5ht9\" (UniqueName: \"kubernetes.io/projected/8b5953ad-0a78-4483-9097-2d4de5ad084e-kube-api-access-t5ht9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4ft55\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:46 crc kubenswrapper[4805]: I1216 12:35:46.930173 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:35:47 crc kubenswrapper[4805]: I1216 12:35:47.307149 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55"] Dec 16 12:35:47 crc kubenswrapper[4805]: I1216 12:35:47.371897 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" event={"ID":"8b5953ad-0a78-4483-9097-2d4de5ad084e","Type":"ContainerStarted","Data":"428e8fbb31d3415f1092cb0c68656d2d90bc95cfcb38cd8c0cd023075b8a6bb2"} Dec 16 12:35:48 crc kubenswrapper[4805]: I1216 12:35:48.386475 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" event={"ID":"8b5953ad-0a78-4483-9097-2d4de5ad084e","Type":"ContainerStarted","Data":"bdaf79ca6d116f65789de8d4953fc8ea66b4697f683d71d5fbc573144f4c687c"} Dec 16 12:35:48 crc kubenswrapper[4805]: I1216 12:35:48.410586 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" podStartSLOduration=1.9341947 podStartE2EDuration="2.410558765s" podCreationTimestamp="2025-12-16 12:35:46 +0000 UTC" firstStartedPulling="2025-12-16 12:35:47.314494154 +0000 UTC m=+2421.032751959" lastFinishedPulling="2025-12-16 12:35:47.790858219 +0000 UTC m=+2421.509116024" observedRunningTime="2025-12-16 12:35:48.408467805 +0000 UTC m=+2422.126725610" watchObservedRunningTime="2025-12-16 12:35:48.410558765 +0000 UTC m=+2422.128816590" Dec 16 12:35:55 crc kubenswrapper[4805]: I1216 12:35:55.523316 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:35:55 crc kubenswrapper[4805]: E1216 12:35:55.524327 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:36:08 crc kubenswrapper[4805]: I1216 12:36:08.523754 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:36:08 crc kubenswrapper[4805]: E1216 12:36:08.524764 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:36:20 crc kubenswrapper[4805]: I1216 12:36:20.523501 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:36:20 crc kubenswrapper[4805]: E1216 12:36:20.524284 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:36:31 crc kubenswrapper[4805]: I1216 12:36:31.523202 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:36:31 crc kubenswrapper[4805]: E1216 12:36:31.524014 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:36:43 crc kubenswrapper[4805]: I1216 12:36:43.523195 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:36:43 crc kubenswrapper[4805]: E1216 12:36:43.524114 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:36:58 crc kubenswrapper[4805]: I1216 12:36:58.522881 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:36:58 crc kubenswrapper[4805]: E1216 12:36:58.523763 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:37:13 crc kubenswrapper[4805]: I1216 12:37:13.522922 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:37:13 crc kubenswrapper[4805]: E1216 12:37:13.523715 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:37:26 crc kubenswrapper[4805]: I1216 12:37:26.529370 4805 
scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:37:26 crc kubenswrapper[4805]: E1216 12:37:26.530353 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:37:37 crc kubenswrapper[4805]: I1216 12:37:37.522584 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:37:37 crc kubenswrapper[4805]: E1216 12:37:37.523332 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:37:52 crc kubenswrapper[4805]: I1216 12:37:52.523769 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:37:52 crc kubenswrapper[4805]: E1216 12:37:52.525638 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:38:04 crc kubenswrapper[4805]: I1216 12:38:04.523569 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:38:04 crc kubenswrapper[4805]: E1216 12:38:04.524338 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:38:15 crc kubenswrapper[4805]: I1216 12:38:15.523235 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:38:15 crc kubenswrapper[4805]: E1216 12:38:15.524200 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:38:29 crc kubenswrapper[4805]: I1216 12:38:29.523116 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:38:30 crc kubenswrapper[4805]: I1216 12:38:30.469571 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"3ba735f8235085fb58f6062dbfcc49f2fae1621b630408a76dc90179ba06995b"} Dec 16 12:40:05 crc kubenswrapper[4805]: I1216 12:40:05.090127 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9sszv"] Dec 16 12:40:05 crc kubenswrapper[4805]: I1216 12:40:05.093508 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:05 crc kubenswrapper[4805]: I1216 12:40:05.111424 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9sszv"] Dec 16 12:40:05 crc kubenswrapper[4805]: I1216 12:40:05.149226 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9twb\" (UniqueName: \"kubernetes.io/projected/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-kube-api-access-d9twb\") pod \"redhat-operators-9sszv\" (UID: \"c38130df-4e95-4f43-a69b-e24c7cc9cbaa\") " pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:05 crc kubenswrapper[4805]: I1216 12:40:05.149371 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-catalog-content\") pod \"redhat-operators-9sszv\" (UID: \"c38130df-4e95-4f43-a69b-e24c7cc9cbaa\") " pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:05 crc kubenswrapper[4805]: I1216 12:40:05.149461 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-utilities\") pod \"redhat-operators-9sszv\" (UID: \"c38130df-4e95-4f43-a69b-e24c7cc9cbaa\") " pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:05 crc kubenswrapper[4805]: I1216 12:40:05.251836 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-catalog-content\") pod \"redhat-operators-9sszv\" (UID: \"c38130df-4e95-4f43-a69b-e24c7cc9cbaa\") " pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:05 crc kubenswrapper[4805]: I1216 12:40:05.251983 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-utilities\") pod \"redhat-operators-9sszv\" (UID: \"c38130df-4e95-4f43-a69b-e24c7cc9cbaa\") " pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:05 crc kubenswrapper[4805]: I1216 12:40:05.252094 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9twb\" (UniqueName: \"kubernetes.io/projected/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-kube-api-access-d9twb\") pod \"redhat-operators-9sszv\" (UID: \"c38130df-4e95-4f43-a69b-e24c7cc9cbaa\") " pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:05 crc kubenswrapper[4805]: I1216 12:40:05.252823 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-catalog-content\") pod \"redhat-operators-9sszv\" (UID: \"c38130df-4e95-4f43-a69b-e24c7cc9cbaa\") " 
pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:05 crc kubenswrapper[4805]: I1216 12:40:05.252888 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-utilities\") pod \"redhat-operators-9sszv\" (UID: \"c38130df-4e95-4f43-a69b-e24c7cc9cbaa\") " pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:05 crc kubenswrapper[4805]: I1216 12:40:05.274885 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9twb\" (UniqueName: \"kubernetes.io/projected/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-kube-api-access-d9twb\") pod \"redhat-operators-9sszv\" (UID: \"c38130df-4e95-4f43-a69b-e24c7cc9cbaa\") " pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:05 crc kubenswrapper[4805]: I1216 12:40:05.419741 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:06 crc kubenswrapper[4805]: I1216 12:40:06.316324 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9sszv"] Dec 16 12:40:06 crc kubenswrapper[4805]: I1216 12:40:06.830684 4805 generic.go:334] "Generic (PLEG): container finished" podID="c38130df-4e95-4f43-a69b-e24c7cc9cbaa" containerID="1c31c2258695b02af3af7eb104f4cdd842760fbffc408dba1112902ecc076321" exitCode=0 Dec 16 12:40:06 crc kubenswrapper[4805]: I1216 12:40:06.831309 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sszv" event={"ID":"c38130df-4e95-4f43-a69b-e24c7cc9cbaa","Type":"ContainerDied","Data":"1c31c2258695b02af3af7eb104f4cdd842760fbffc408dba1112902ecc076321"} Dec 16 12:40:06 crc kubenswrapper[4805]: I1216 12:40:06.831449 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sszv" event={"ID":"c38130df-4e95-4f43-a69b-e24c7cc9cbaa","Type":"ContainerStarted","Data":"219fc945b5a99771cc8863cff4a6fd7a3197b8edd52543ff1f8255887a14ca2b"} Dec 16 12:40:06 crc kubenswrapper[4805]: I1216 12:40:06.833644 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 16 12:40:09 crc kubenswrapper[4805]: I1216 12:40:09.872901 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sszv" event={"ID":"c38130df-4e95-4f43-a69b-e24c7cc9cbaa","Type":"ContainerStarted","Data":"57d2f597624084af92205e389e4172e193bcaa0649ca4d98fdc1df8f9a10df3b"} Dec 16 12:40:13 crc kubenswrapper[4805]: I1216 12:40:13.918097 4805 generic.go:334] "Generic (PLEG): container finished" podID="c38130df-4e95-4f43-a69b-e24c7cc9cbaa" containerID="57d2f597624084af92205e389e4172e193bcaa0649ca4d98fdc1df8f9a10df3b" exitCode=0 Dec 16 12:40:13 crc kubenswrapper[4805]: I1216 12:40:13.918213 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sszv" event={"ID":"c38130df-4e95-4f43-a69b-e24c7cc9cbaa","Type":"ContainerDied","Data":"57d2f597624084af92205e389e4172e193bcaa0649ca4d98fdc1df8f9a10df3b"} Dec 16 12:40:15 crc kubenswrapper[4805]: I1216 12:40:15.945766 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sszv" event={"ID":"c38130df-4e95-4f43-a69b-e24c7cc9cbaa","Type":"ContainerStarted","Data":"7dfb4f276beb4f45e58e71eb9a87dd65ba7b29d2b354fd997bc247d40e8c70da"} Dec 16 12:40:15 crc kubenswrapper[4805]: I1216 12:40:15.975825 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9sszv" podStartSLOduration=2.996930238 podStartE2EDuration="10.975748926s" podCreationTimestamp="2025-12-16 12:40:05 +0000 UTC" firstStartedPulling="2025-12-16 12:40:06.833037099 +0000 UTC m=+2680.551294904" lastFinishedPulling="2025-12-16 12:40:14.811855787 +0000 UTC m=+2688.530113592" observedRunningTime="2025-12-16 12:40:15.965481992 +0000 UTC m=+2689.683739797" watchObservedRunningTime="2025-12-16 12:40:15.975748926 +0000 UTC m=+2689.694006741" Dec 16 12:40:25 crc kubenswrapper[4805]: I1216 12:40:25.420624 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:25 crc kubenswrapper[4805]: I1216 12:40:25.421408 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:25 crc kubenswrapper[4805]: I1216 12:40:25.467740 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:26 crc kubenswrapper[4805]: I1216 12:40:26.084351 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:26 crc kubenswrapper[4805]: I1216 12:40:26.153443 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9sszv"] Dec 16 12:40:28 crc kubenswrapper[4805]: I1216 12:40:28.050938 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9sszv" podUID="c38130df-4e95-4f43-a69b-e24c7cc9cbaa" containerName="registry-server" containerID="cri-o://7dfb4f276beb4f45e58e71eb9a87dd65ba7b29d2b354fd997bc247d40e8c70da" gracePeriod=2 Dec 16 12:40:28 crc kubenswrapper[4805]: I1216 12:40:28.544584 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:28 crc kubenswrapper[4805]: I1216 12:40:28.632642 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9twb\" (UniqueName: \"kubernetes.io/projected/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-kube-api-access-d9twb\") pod \"c38130df-4e95-4f43-a69b-e24c7cc9cbaa\" (UID: \"c38130df-4e95-4f43-a69b-e24c7cc9cbaa\") " Dec 16 12:40:28 crc kubenswrapper[4805]: I1216 12:40:28.632714 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-catalog-content\") pod \"c38130df-4e95-4f43-a69b-e24c7cc9cbaa\" (UID: \"c38130df-4e95-4f43-a69b-e24c7cc9cbaa\") " Dec 16 12:40:28 crc kubenswrapper[4805]: I1216 12:40:28.632874 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-utilities\") pod \"c38130df-4e95-4f43-a69b-e24c7cc9cbaa\" (UID: \"c38130df-4e95-4f43-a69b-e24c7cc9cbaa\") " Dec 16 12:40:28 crc kubenswrapper[4805]: I1216 12:40:28.636722 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-utilities" (OuterVolumeSpecName: "utilities") pod "c38130df-4e95-4f43-a69b-e24c7cc9cbaa" (UID: "c38130df-4e95-4f43-a69b-e24c7cc9cbaa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:40:28 crc kubenswrapper[4805]: I1216 12:40:28.643350 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-kube-api-access-d9twb" (OuterVolumeSpecName: "kube-api-access-d9twb") pod "c38130df-4e95-4f43-a69b-e24c7cc9cbaa" (UID: "c38130df-4e95-4f43-a69b-e24c7cc9cbaa"). InnerVolumeSpecName "kube-api-access-d9twb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:40:28 crc kubenswrapper[4805]: I1216 12:40:28.736261 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:40:28 crc kubenswrapper[4805]: I1216 12:40:28.736304 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9twb\" (UniqueName: \"kubernetes.io/projected/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-kube-api-access-d9twb\") on node \"crc\" DevicePath \"\"" Dec 16 12:40:28 crc kubenswrapper[4805]: I1216 12:40:28.763786 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c38130df-4e95-4f43-a69b-e24c7cc9cbaa" (UID: "c38130df-4e95-4f43-a69b-e24c7cc9cbaa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:40:28 crc kubenswrapper[4805]: I1216 12:40:28.838195 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c38130df-4e95-4f43-a69b-e24c7cc9cbaa-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:40:29 crc kubenswrapper[4805]: I1216 12:40:29.062059 4805 generic.go:334] "Generic (PLEG): container finished" podID="c38130df-4e95-4f43-a69b-e24c7cc9cbaa" containerID="7dfb4f276beb4f45e58e71eb9a87dd65ba7b29d2b354fd997bc247d40e8c70da" exitCode=0 Dec 16 12:40:29 crc kubenswrapper[4805]: I1216 12:40:29.062181 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sszv" event={"ID":"c38130df-4e95-4f43-a69b-e24c7cc9cbaa","Type":"ContainerDied","Data":"7dfb4f276beb4f45e58e71eb9a87dd65ba7b29d2b354fd997bc247d40e8c70da"} Dec 16 12:40:29 crc kubenswrapper[4805]: I1216 12:40:29.062267 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9sszv" Dec 16 12:40:29 crc kubenswrapper[4805]: I1216 12:40:29.063435 4805 scope.go:117] "RemoveContainer" containerID="7dfb4f276beb4f45e58e71eb9a87dd65ba7b29d2b354fd997bc247d40e8c70da" Dec 16 12:40:29 crc kubenswrapper[4805]: I1216 12:40:29.063306 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sszv" event={"ID":"c38130df-4e95-4f43-a69b-e24c7cc9cbaa","Type":"ContainerDied","Data":"219fc945b5a99771cc8863cff4a6fd7a3197b8edd52543ff1f8255887a14ca2b"} Dec 16 12:40:29 crc kubenswrapper[4805]: I1216 12:40:29.089421 4805 scope.go:117] "RemoveContainer" containerID="57d2f597624084af92205e389e4172e193bcaa0649ca4d98fdc1df8f9a10df3b" Dec 16 12:40:29 crc kubenswrapper[4805]: I1216 12:40:29.121728 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9sszv"] Dec 16 12:40:29 crc kubenswrapper[4805]: I1216 12:40:29.158220 4805 scope.go:117] "RemoveContainer" containerID="1c31c2258695b02af3af7eb104f4cdd842760fbffc408dba1112902ecc076321" Dec 16 12:40:29 crc kubenswrapper[4805]: I1216 12:40:29.162354 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9sszv"] Dec 16 12:40:29 crc kubenswrapper[4805]: I1216 12:40:29.183729 4805 scope.go:117] "RemoveContainer" containerID="7dfb4f276beb4f45e58e71eb9a87dd65ba7b29d2b354fd997bc247d40e8c70da" Dec 16 12:40:29 crc kubenswrapper[4805]: E1216 12:40:29.184378 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dfb4f276beb4f45e58e71eb9a87dd65ba7b29d2b354fd997bc247d40e8c70da\": container with ID starting with 7dfb4f276beb4f45e58e71eb9a87dd65ba7b29d2b354fd997bc247d40e8c70da not found: ID does not exist" containerID="7dfb4f276beb4f45e58e71eb9a87dd65ba7b29d2b354fd997bc247d40e8c70da" Dec 16 12:40:29 crc kubenswrapper[4805]: I1216 12:40:29.184420 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dfb4f276beb4f45e58e71eb9a87dd65ba7b29d2b354fd997bc247d40e8c70da"} err="failed to get container status \"7dfb4f276beb4f45e58e71eb9a87dd65ba7b29d2b354fd997bc247d40e8c70da\": rpc error: code = NotFound desc = could not find container \"7dfb4f276beb4f45e58e71eb9a87dd65ba7b29d2b354fd997bc247d40e8c70da\": container with ID starting with 7dfb4f276beb4f45e58e71eb9a87dd65ba7b29d2b354fd997bc247d40e8c70da not found: ID does not exist" Dec 16 12:40:29 crc kubenswrapper[4805]: I1216 12:40:29.184449 4805 scope.go:117] "RemoveContainer" containerID="57d2f597624084af92205e389e4172e193bcaa0649ca4d98fdc1df8f9a10df3b" Dec 16 12:40:29 crc kubenswrapper[4805]: E1216 12:40:29.184834 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57d2f597624084af92205e389e4172e193bcaa0649ca4d98fdc1df8f9a10df3b\": container with ID starting with 57d2f597624084af92205e389e4172e193bcaa0649ca4d98fdc1df8f9a10df3b not found: ID does not exist" containerID="57d2f597624084af92205e389e4172e193bcaa0649ca4d98fdc1df8f9a10df3b" Dec 16 12:40:29 crc kubenswrapper[4805]: I1216 12:40:29.184860 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57d2f597624084af92205e389e4172e193bcaa0649ca4d98fdc1df8f9a10df3b"} err="failed to get container status \"57d2f597624084af92205e389e4172e193bcaa0649ca4d98fdc1df8f9a10df3b\": rpc error: code = NotFound desc = could not find container 
\"57d2f597624084af92205e389e4172e193bcaa0649ca4d98fdc1df8f9a10df3b\": container with ID starting with 57d2f597624084af92205e389e4172e193bcaa0649ca4d98fdc1df8f9a10df3b not found: ID does not exist" Dec 16 12:40:29 crc kubenswrapper[4805]: I1216 12:40:29.184879 4805 scope.go:117] "RemoveContainer" containerID="1c31c2258695b02af3af7eb104f4cdd842760fbffc408dba1112902ecc076321" Dec 16 12:40:29 crc kubenswrapper[4805]: E1216 12:40:29.185136 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c31c2258695b02af3af7eb104f4cdd842760fbffc408dba1112902ecc076321\": container with ID starting with 1c31c2258695b02af3af7eb104f4cdd842760fbffc408dba1112902ecc076321 not found: ID does not exist" containerID="1c31c2258695b02af3af7eb104f4cdd842760fbffc408dba1112902ecc076321" Dec 16 12:40:29 crc kubenswrapper[4805]: I1216 12:40:29.185300 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c31c2258695b02af3af7eb104f4cdd842760fbffc408dba1112902ecc076321"} err="failed to get container status \"1c31c2258695b02af3af7eb104f4cdd842760fbffc408dba1112902ecc076321\": rpc error: code = NotFound desc = could not find container \"1c31c2258695b02af3af7eb104f4cdd842760fbffc408dba1112902ecc076321\": container with ID starting with 1c31c2258695b02af3af7eb104f4cdd842760fbffc408dba1112902ecc076321 not found: ID does not exist" Dec 16 12:40:30 crc kubenswrapper[4805]: I1216 12:40:30.534656 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c38130df-4e95-4f43-a69b-e24c7cc9cbaa" path="/var/lib/kubelet/pods/c38130df-4e95-4f43-a69b-e24c7cc9cbaa/volumes" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.071262 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.071880 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.224384 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pn9h9"] Dec 16 12:40:57 crc kubenswrapper[4805]: E1216 12:40:57.224969 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38130df-4e95-4f43-a69b-e24c7cc9cbaa" containerName="registry-server" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.225002 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38130df-4e95-4f43-a69b-e24c7cc9cbaa" containerName="registry-server" Dec 16 12:40:57 crc kubenswrapper[4805]: E1216 12:40:57.225038 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38130df-4e95-4f43-a69b-e24c7cc9cbaa" containerName="extract-utilities" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.225050 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38130df-4e95-4f43-a69b-e24c7cc9cbaa" containerName="extract-utilities" Dec 16 12:40:57 crc kubenswrapper[4805]: E1216 12:40:57.225066 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c38130df-4e95-4f43-a69b-e24c7cc9cbaa" containerName="extract-content" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.225075 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38130df-4e95-4f43-a69b-e24c7cc9cbaa" containerName="extract-content" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.225356 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38130df-4e95-4f43-a69b-e24c7cc9cbaa" containerName="registry-server" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.227199 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.235732 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pn9h9"] Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.338206 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea6a834-b4f0-488e-84b0-1fff77a3192b-catalog-content\") pod \"community-operators-pn9h9\" (UID: \"1ea6a834-b4f0-488e-84b0-1fff77a3192b\") " pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.338312 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea6a834-b4f0-488e-84b0-1fff77a3192b-utilities\") pod \"community-operators-pn9h9\" (UID: \"1ea6a834-b4f0-488e-84b0-1fff77a3192b\") " pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.338518 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fltfd\" (UniqueName: \"kubernetes.io/projected/1ea6a834-b4f0-488e-84b0-1fff77a3192b-kube-api-access-fltfd\") pod \"community-operators-pn9h9\" (UID: \"1ea6a834-b4f0-488e-84b0-1fff77a3192b\") " pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.440822 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea6a834-b4f0-488e-84b0-1fff77a3192b-catalog-content\") pod \"community-operators-pn9h9\" (UID: \"1ea6a834-b4f0-488e-84b0-1fff77a3192b\") " pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.440916 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea6a834-b4f0-488e-84b0-1fff77a3192b-utilities\") pod \"community-operators-pn9h9\" (UID: \"1ea6a834-b4f0-488e-84b0-1fff77a3192b\") " pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.441016 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fltfd\" (UniqueName: \"kubernetes.io/projected/1ea6a834-b4f0-488e-84b0-1fff77a3192b-kube-api-access-fltfd\") pod \"community-operators-pn9h9\" (UID: \"1ea6a834-b4f0-488e-84b0-1fff77a3192b\") " pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.441493 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea6a834-b4f0-488e-84b0-1fff77a3192b-catalog-content\") pod 
\"community-operators-pn9h9\" (UID: \"1ea6a834-b4f0-488e-84b0-1fff77a3192b\") " pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.441550 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea6a834-b4f0-488e-84b0-1fff77a3192b-utilities\") pod \"community-operators-pn9h9\" (UID: \"1ea6a834-b4f0-488e-84b0-1fff77a3192b\") " pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.467758 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fltfd\" (UniqueName: \"kubernetes.io/projected/1ea6a834-b4f0-488e-84b0-1fff77a3192b-kube-api-access-fltfd\") pod \"community-operators-pn9h9\" (UID: \"1ea6a834-b4f0-488e-84b0-1fff77a3192b\") " pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.549857 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:40:57 crc kubenswrapper[4805]: I1216 12:40:57.924920 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pn9h9"] Dec 16 12:40:58 crc kubenswrapper[4805]: I1216 12:40:58.353391 4805 generic.go:334] "Generic (PLEG): container finished" podID="1ea6a834-b4f0-488e-84b0-1fff77a3192b" containerID="5bc676f4c276e7d5a1b9b04c7485f938eb6ec06e0dfd9f1976d4c883feba05b4" exitCode=0 Dec 16 12:40:58 crc kubenswrapper[4805]: I1216 12:40:58.353456 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn9h9" event={"ID":"1ea6a834-b4f0-488e-84b0-1fff77a3192b","Type":"ContainerDied","Data":"5bc676f4c276e7d5a1b9b04c7485f938eb6ec06e0dfd9f1976d4c883feba05b4"} Dec 16 12:40:58 crc kubenswrapper[4805]: I1216 12:40:58.353518 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn9h9" event={"ID":"1ea6a834-b4f0-488e-84b0-1fff77a3192b","Type":"ContainerStarted","Data":"9a06e5e0bf5dcb29953a4a58cc0023a2cfe7f27b5731ae6037855171703fb0d0"} Dec 16 12:41:00 crc kubenswrapper[4805]: I1216 12:41:00.018323 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r662d"] Dec 16 12:41:00 crc kubenswrapper[4805]: I1216 12:41:00.021399 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:00 crc kubenswrapper[4805]: I1216 12:41:00.033919 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r662d"] Dec 16 12:41:00 crc kubenswrapper[4805]: I1216 12:41:00.210436 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7efc5167-820b-4217-b5a4-2636dcf25c71-utilities\") pod \"redhat-marketplace-r662d\" (UID: \"7efc5167-820b-4217-b5a4-2636dcf25c71\") " pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:00 crc kubenswrapper[4805]: I1216 12:41:00.210722 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97w86\" (UniqueName: \"kubernetes.io/projected/7efc5167-820b-4217-b5a4-2636dcf25c71-kube-api-access-97w86\") pod \"redhat-marketplace-r662d\" (UID: \"7efc5167-820b-4217-b5a4-2636dcf25c71\") " pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:00 crc kubenswrapper[4805]: I1216 12:41:00.211072 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7efc5167-820b-4217-b5a4-2636dcf25c71-catalog-content\") pod \"redhat-marketplace-r662d\" (UID: \"7efc5167-820b-4217-b5a4-2636dcf25c71\") " pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:00 crc kubenswrapper[4805]: I1216 12:41:00.313268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7efc5167-820b-4217-b5a4-2636dcf25c71-utilities\") pod \"redhat-marketplace-r662d\" (UID: \"7efc5167-820b-4217-b5a4-2636dcf25c71\") " pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:00 crc kubenswrapper[4805]: I1216 12:41:00.313317 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97w86\" (UniqueName: \"kubernetes.io/projected/7efc5167-820b-4217-b5a4-2636dcf25c71-kube-api-access-97w86\") pod \"redhat-marketplace-r662d\" (UID: \"7efc5167-820b-4217-b5a4-2636dcf25c71\") " pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:00 crc kubenswrapper[4805]: I1216 12:41:00.313358 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7efc5167-820b-4217-b5a4-2636dcf25c71-catalog-content\") pod \"redhat-marketplace-r662d\" (UID: \"7efc5167-820b-4217-b5a4-2636dcf25c71\") " pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:00 crc kubenswrapper[4805]: I1216 12:41:00.314173 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7efc5167-820b-4217-b5a4-2636dcf25c71-catalog-content\") pod \"redhat-marketplace-r662d\" (UID: \"7efc5167-820b-4217-b5a4-2636dcf25c71\") " pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:00 crc kubenswrapper[4805]: I1216 12:41:00.314300 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7efc5167-820b-4217-b5a4-2636dcf25c71-utilities\") pod \"redhat-marketplace-r662d\" (UID: \"7efc5167-820b-4217-b5a4-2636dcf25c71\") " pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:00 crc kubenswrapper[4805]: I1216 12:41:00.336107 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-97w86\" (UniqueName: \"kubernetes.io/projected/7efc5167-820b-4217-b5a4-2636dcf25c71-kube-api-access-97w86\") pod \"redhat-marketplace-r662d\" (UID: \"7efc5167-820b-4217-b5a4-2636dcf25c71\") " pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:00 crc kubenswrapper[4805]: I1216 12:41:00.349496 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:00 crc kubenswrapper[4805]: I1216 12:41:00.380057 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn9h9" event={"ID":"1ea6a834-b4f0-488e-84b0-1fff77a3192b","Type":"ContainerStarted","Data":"e81e4474adc54fb470a75627a7d9295c25718f6c56cd71fc2e484eaa0a266ac6"} Dec 16 12:41:00 crc kubenswrapper[4805]: I1216 12:41:00.944049 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r662d"] Dec 16 12:41:00 crc kubenswrapper[4805]: W1216 12:41:00.956491 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7efc5167_820b_4217_b5a4_2636dcf25c71.slice/crio-9707108c1a2a50aab5eccfa3bc4d90a7a316f9bb75461e1e8a77c5d9cb5259a7 WatchSource:0}: Error finding container 9707108c1a2a50aab5eccfa3bc4d90a7a316f9bb75461e1e8a77c5d9cb5259a7: Status 404 returned error can't find the container with id 9707108c1a2a50aab5eccfa3bc4d90a7a316f9bb75461e1e8a77c5d9cb5259a7 Dec 16 12:41:01 crc kubenswrapper[4805]: I1216 12:41:01.397448 4805 generic.go:334] "Generic (PLEG): container finished" podID="1ea6a834-b4f0-488e-84b0-1fff77a3192b" containerID="e81e4474adc54fb470a75627a7d9295c25718f6c56cd71fc2e484eaa0a266ac6" exitCode=0 Dec 16 12:41:01 crc kubenswrapper[4805]: I1216 12:41:01.397517 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn9h9" event={"ID":"1ea6a834-b4f0-488e-84b0-1fff77a3192b","Type":"ContainerDied","Data":"e81e4474adc54fb470a75627a7d9295c25718f6c56cd71fc2e484eaa0a266ac6"} Dec 16 12:41:01 crc kubenswrapper[4805]: I1216 12:41:01.401594 4805 generic.go:334] "Generic (PLEG): container finished" podID="7efc5167-820b-4217-b5a4-2636dcf25c71" containerID="28a0ec2a96034af0ae2d36e1984a3cbf20e46ab4871350597aef0cca3e4bd746" exitCode=0 Dec 16 12:41:01 crc kubenswrapper[4805]: I1216 12:41:01.401645 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r662d" event={"ID":"7efc5167-820b-4217-b5a4-2636dcf25c71","Type":"ContainerDied","Data":"28a0ec2a96034af0ae2d36e1984a3cbf20e46ab4871350597aef0cca3e4bd746"} Dec 16 12:41:01 crc kubenswrapper[4805]: I1216 12:41:01.401675 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r662d" event={"ID":"7efc5167-820b-4217-b5a4-2636dcf25c71","Type":"ContainerStarted","Data":"9707108c1a2a50aab5eccfa3bc4d90a7a316f9bb75461e1e8a77c5d9cb5259a7"} Dec 16 12:41:02 crc kubenswrapper[4805]: I1216 12:41:02.419908 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r662d" event={"ID":"7efc5167-820b-4217-b5a4-2636dcf25c71","Type":"ContainerStarted","Data":"f6a48720ae25c83e88cf202a2a7287871e882d9ddd28fc0c0af38e4a2999cc90"} Dec 16 12:41:02 crc kubenswrapper[4805]: I1216 12:41:02.428691 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn9h9" 
event={"ID":"1ea6a834-b4f0-488e-84b0-1fff77a3192b","Type":"ContainerStarted","Data":"61825872f9feaf101be2ab48dd3c76d035d63fb67221d682c4bb2d796ab8cc2f"} Dec 16 12:41:02 crc kubenswrapper[4805]: I1216 12:41:02.489193 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pn9h9" podStartSLOduration=1.97590797 podStartE2EDuration="5.489117502s" podCreationTimestamp="2025-12-16 12:40:57 +0000 UTC" firstStartedPulling="2025-12-16 12:40:58.355720179 +0000 UTC m=+2732.073977984" lastFinishedPulling="2025-12-16 12:41:01.868929691 +0000 UTC m=+2735.587187516" observedRunningTime="2025-12-16 12:41:02.468755948 +0000 UTC m=+2736.187013753" watchObservedRunningTime="2025-12-16 12:41:02.489117502 +0000 UTC m=+2736.207375327" Dec 16 12:41:03 crc kubenswrapper[4805]: I1216 12:41:03.441532 4805 generic.go:334] "Generic (PLEG): container finished" podID="7efc5167-820b-4217-b5a4-2636dcf25c71" containerID="f6a48720ae25c83e88cf202a2a7287871e882d9ddd28fc0c0af38e4a2999cc90" exitCode=0 Dec 16 12:41:03 crc kubenswrapper[4805]: I1216 12:41:03.441812 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r662d" event={"ID":"7efc5167-820b-4217-b5a4-2636dcf25c71","Type":"ContainerDied","Data":"f6a48720ae25c83e88cf202a2a7287871e882d9ddd28fc0c0af38e4a2999cc90"} Dec 16 12:41:04 crc kubenswrapper[4805]: I1216 12:41:04.454783 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r662d" event={"ID":"7efc5167-820b-4217-b5a4-2636dcf25c71","Type":"ContainerStarted","Data":"41547d617989edc4eff991335babba956eed3821890920a79af6347b1d559ab8"} Dec 16 12:41:04 crc kubenswrapper[4805]: I1216 12:41:04.481449 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r662d" podStartSLOduration=2.706889016 podStartE2EDuration="5.48142166s" podCreationTimestamp="2025-12-16 12:40:59 +0000 UTC" firstStartedPulling="2025-12-16 12:41:01.402953672 +0000 UTC m=+2735.121211477" lastFinishedPulling="2025-12-16 12:41:04.177486316 +0000 UTC m=+2737.895744121" observedRunningTime="2025-12-16 12:41:04.478740893 +0000 UTC m=+2738.196998718" watchObservedRunningTime="2025-12-16 12:41:04.48142166 +0000 UTC m=+2738.199679485" Dec 16 12:41:07 crc kubenswrapper[4805]: I1216 12:41:07.551003 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:41:07 crc kubenswrapper[4805]: I1216 12:41:07.551625 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:41:07 crc kubenswrapper[4805]: I1216 12:41:07.606835 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:41:08 crc kubenswrapper[4805]: I1216 12:41:08.544669 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:41:09 crc kubenswrapper[4805]: I1216 12:41:09.507838 4805 generic.go:334] "Generic (PLEG): container finished" podID="8b5953ad-0a78-4483-9097-2d4de5ad084e" containerID="bdaf79ca6d116f65789de8d4953fc8ea66b4697f683d71d5fbc573144f4c687c" exitCode=0 Dec 16 12:41:09 crc kubenswrapper[4805]: I1216 12:41:09.507908 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" 
event={"ID":"8b5953ad-0a78-4483-9097-2d4de5ad084e","Type":"ContainerDied","Data":"bdaf79ca6d116f65789de8d4953fc8ea66b4697f683d71d5fbc573144f4c687c"} Dec 16 12:41:09 crc kubenswrapper[4805]: I1216 12:41:09.604715 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pn9h9"] Dec 16 12:41:10 crc kubenswrapper[4805]: I1216 12:41:10.350587 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:10 crc kubenswrapper[4805]: I1216 12:41:10.350654 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:10 crc kubenswrapper[4805]: I1216 12:41:10.401182 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:10 crc kubenswrapper[4805]: I1216 12:41:10.579485 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.059270 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.177691 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-libvirt-secret-0\") pod \"8b5953ad-0a78-4483-9097-2d4de5ad084e\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.178103 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5ht9\" (UniqueName: \"kubernetes.io/projected/8b5953ad-0a78-4483-9097-2d4de5ad084e-kube-api-access-t5ht9\") pod \"8b5953ad-0a78-4483-9097-2d4de5ad084e\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.178323 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-inventory\") pod \"8b5953ad-0a78-4483-9097-2d4de5ad084e\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.178863 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-ssh-key\") pod \"8b5953ad-0a78-4483-9097-2d4de5ad084e\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.179003 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-libvirt-combined-ca-bundle\") pod \"8b5953ad-0a78-4483-9097-2d4de5ad084e\" (UID: \"8b5953ad-0a78-4483-9097-2d4de5ad084e\") " Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.184884 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8b5953ad-0a78-4483-9097-2d4de5ad084e" (UID: "8b5953ad-0a78-4483-9097-2d4de5ad084e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.201493 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5953ad-0a78-4483-9097-2d4de5ad084e-kube-api-access-t5ht9" (OuterVolumeSpecName: "kube-api-access-t5ht9") pod "8b5953ad-0a78-4483-9097-2d4de5ad084e" (UID: "8b5953ad-0a78-4483-9097-2d4de5ad084e"). InnerVolumeSpecName "kube-api-access-t5ht9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.217385 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8b5953ad-0a78-4483-9097-2d4de5ad084e" (UID: "8b5953ad-0a78-4483-9097-2d4de5ad084e"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.217496 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-inventory" (OuterVolumeSpecName: "inventory") pod "8b5953ad-0a78-4483-9097-2d4de5ad084e" (UID: "8b5953ad-0a78-4483-9097-2d4de5ad084e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.233390 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8b5953ad-0a78-4483-9097-2d4de5ad084e" (UID: "8b5953ad-0a78-4483-9097-2d4de5ad084e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.281234 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.281278 4805 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.281296 4805 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.281309 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5ht9\" (UniqueName: \"kubernetes.io/projected/8b5953ad-0a78-4483-9097-2d4de5ad084e-kube-api-access-t5ht9\") on node \"crc\" DevicePath \"\"" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.281322 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b5953ad-0a78-4483-9097-2d4de5ad084e-inventory\") on node \"crc\" DevicePath \"\"" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.528856 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.530426 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4ft55" event={"ID":"8b5953ad-0a78-4483-9097-2d4de5ad084e","Type":"ContainerDied","Data":"428e8fbb31d3415f1092cb0c68656d2d90bc95cfcb38cd8c0cd023075b8a6bb2"} Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.530513 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="428e8fbb31d3415f1092cb0c68656d2d90bc95cfcb38cd8c0cd023075b8a6bb2" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.530675 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pn9h9" podUID="1ea6a834-b4f0-488e-84b0-1fff77a3192b" containerName="registry-server" containerID="cri-o://61825872f9feaf101be2ab48dd3c76d035d63fb67221d682c4bb2d796ab8cc2f" gracePeriod=2 Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.691377 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q"] Dec 16 12:41:11 crc kubenswrapper[4805]: E1216 12:41:11.691927 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5953ad-0a78-4483-9097-2d4de5ad084e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.691943 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5953ad-0a78-4483-9097-2d4de5ad084e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.692227 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5953ad-0a78-4483-9097-2d4de5ad084e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.693056 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.696885 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.697167 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.697322 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.697552 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.697705 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.698011 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.698215 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.711233 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q"] Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.790858 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.790911 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.790976 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.790997 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.791030 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.791070 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/128a7ec0-e80f-4147-a459-283405d9c838-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.791112 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.791133 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.791178 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmgnl\" (UniqueName: \"kubernetes.io/projected/128a7ec0-e80f-4147-a459-283405d9c838-kube-api-access-pmgnl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.893054 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.893174 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/128a7ec0-e80f-4147-a459-283405d9c838-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.893299 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.893353 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.893427 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmgnl\" (UniqueName: \"kubernetes.io/projected/128a7ec0-e80f-4147-a459-283405d9c838-kube-api-access-pmgnl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.893610 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.893673 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.893842 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.893878 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.894104 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/128a7ec0-e80f-4147-a459-283405d9c838-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.897237 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.897701 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-ssh-key\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.897970 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.898790 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.899570 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.904551 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.910908 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:11 crc kubenswrapper[4805]: I1216 12:41:11.913892 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmgnl\" (UniqueName: \"kubernetes.io/projected/128a7ec0-e80f-4147-a459-283405d9c838-kube-api-access-pmgnl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwk4q\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.030350 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.033508 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.200860 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea6a834-b4f0-488e-84b0-1fff77a3192b-catalog-content\") pod \"1ea6a834-b4f0-488e-84b0-1fff77a3192b\" (UID: \"1ea6a834-b4f0-488e-84b0-1fff77a3192b\") " Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.200922 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fltfd\" (UniqueName: \"kubernetes.io/projected/1ea6a834-b4f0-488e-84b0-1fff77a3192b-kube-api-access-fltfd\") pod \"1ea6a834-b4f0-488e-84b0-1fff77a3192b\" (UID: \"1ea6a834-b4f0-488e-84b0-1fff77a3192b\") " Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.200973 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea6a834-b4f0-488e-84b0-1fff77a3192b-utilities\") pod \"1ea6a834-b4f0-488e-84b0-1fff77a3192b\" (UID: \"1ea6a834-b4f0-488e-84b0-1fff77a3192b\") " Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.203177 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea6a834-b4f0-488e-84b0-1fff77a3192b-utilities" (OuterVolumeSpecName: "utilities") pod "1ea6a834-b4f0-488e-84b0-1fff77a3192b" (UID: "1ea6a834-b4f0-488e-84b0-1fff77a3192b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.207773 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea6a834-b4f0-488e-84b0-1fff77a3192b-kube-api-access-fltfd" (OuterVolumeSpecName: "kube-api-access-fltfd") pod "1ea6a834-b4f0-488e-84b0-1fff77a3192b" (UID: "1ea6a834-b4f0-488e-84b0-1fff77a3192b"). InnerVolumeSpecName "kube-api-access-fltfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.276058 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea6a834-b4f0-488e-84b0-1fff77a3192b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ea6a834-b4f0-488e-84b0-1fff77a3192b" (UID: "1ea6a834-b4f0-488e-84b0-1fff77a3192b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.305904 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ea6a834-b4f0-488e-84b0-1fff77a3192b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.305960 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fltfd\" (UniqueName: \"kubernetes.io/projected/1ea6a834-b4f0-488e-84b0-1fff77a3192b-kube-api-access-fltfd\") on node \"crc\" DevicePath \"\"" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.305975 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ea6a834-b4f0-488e-84b0-1fff77a3192b-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.540946 4805 generic.go:334] "Generic (PLEG): container finished" podID="1ea6a834-b4f0-488e-84b0-1fff77a3192b" containerID="61825872f9feaf101be2ab48dd3c76d035d63fb67221d682c4bb2d796ab8cc2f" exitCode=0 Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.541284 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn9h9" event={"ID":"1ea6a834-b4f0-488e-84b0-1fff77a3192b","Type":"ContainerDied","Data":"61825872f9feaf101be2ab48dd3c76d035d63fb67221d682c4bb2d796ab8cc2f"} Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.541318 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn9h9" event={"ID":"1ea6a834-b4f0-488e-84b0-1fff77a3192b","Type":"ContainerDied","Data":"9a06e5e0bf5dcb29953a4a58cc0023a2cfe7f27b5731ae6037855171703fb0d0"} Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.541341 4805 scope.go:117] "RemoveContainer" containerID="61825872f9feaf101be2ab48dd3c76d035d63fb67221d682c4bb2d796ab8cc2f" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.541481 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pn9h9" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.618073 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pn9h9"] Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.662918 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pn9h9"] Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.667012 4805 scope.go:117] "RemoveContainer" containerID="e81e4474adc54fb470a75627a7d9295c25718f6c56cd71fc2e484eaa0a266ac6" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.682430 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q"] Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.711628 4805 scope.go:117] "RemoveContainer" containerID="5bc676f4c276e7d5a1b9b04c7485f938eb6ec06e0dfd9f1976d4c883feba05b4" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.739609 4805 scope.go:117] "RemoveContainer" containerID="61825872f9feaf101be2ab48dd3c76d035d63fb67221d682c4bb2d796ab8cc2f" Dec 16 12:41:12 crc kubenswrapper[4805]: E1216 12:41:12.747643 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61825872f9feaf101be2ab48dd3c76d035d63fb67221d682c4bb2d796ab8cc2f\": container with ID starting with 61825872f9feaf101be2ab48dd3c76d035d63fb67221d682c4bb2d796ab8cc2f not found: ID does not exist" containerID="61825872f9feaf101be2ab48dd3c76d035d63fb67221d682c4bb2d796ab8cc2f" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.747733 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61825872f9feaf101be2ab48dd3c76d035d63fb67221d682c4bb2d796ab8cc2f"} err="failed to get container status \"61825872f9feaf101be2ab48dd3c76d035d63fb67221d682c4bb2d796ab8cc2f\": rpc error: code = NotFound desc = could not find container \"61825872f9feaf101be2ab48dd3c76d035d63fb67221d682c4bb2d796ab8cc2f\": container with ID starting with 61825872f9feaf101be2ab48dd3c76d035d63fb67221d682c4bb2d796ab8cc2f not found: ID does not exist" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.747797 4805 scope.go:117] "RemoveContainer" containerID="e81e4474adc54fb470a75627a7d9295c25718f6c56cd71fc2e484eaa0a266ac6" Dec 16 12:41:12 crc kubenswrapper[4805]: E1216 12:41:12.748740 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e81e4474adc54fb470a75627a7d9295c25718f6c56cd71fc2e484eaa0a266ac6\": container with ID starting with e81e4474adc54fb470a75627a7d9295c25718f6c56cd71fc2e484eaa0a266ac6 not found: ID does not exist" containerID="e81e4474adc54fb470a75627a7d9295c25718f6c56cd71fc2e484eaa0a266ac6" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.748798 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e81e4474adc54fb470a75627a7d9295c25718f6c56cd71fc2e484eaa0a266ac6"} err="failed to get container status \"e81e4474adc54fb470a75627a7d9295c25718f6c56cd71fc2e484eaa0a266ac6\": rpc error: code = NotFound desc = could not find container \"e81e4474adc54fb470a75627a7d9295c25718f6c56cd71fc2e484eaa0a266ac6\": container with ID starting with e81e4474adc54fb470a75627a7d9295c25718f6c56cd71fc2e484eaa0a266ac6 not found: ID does not exist" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.748827 4805 scope.go:117] "RemoveContainer" 
containerID="5bc676f4c276e7d5a1b9b04c7485f938eb6ec06e0dfd9f1976d4c883feba05b4" Dec 16 12:41:12 crc kubenswrapper[4805]: E1216 12:41:12.749698 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bc676f4c276e7d5a1b9b04c7485f938eb6ec06e0dfd9f1976d4c883feba05b4\": container with ID starting with 5bc676f4c276e7d5a1b9b04c7485f938eb6ec06e0dfd9f1976d4c883feba05b4 not found: ID does not exist" containerID="5bc676f4c276e7d5a1b9b04c7485f938eb6ec06e0dfd9f1976d4c883feba05b4" Dec 16 12:41:12 crc kubenswrapper[4805]: I1216 12:41:12.749735 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc676f4c276e7d5a1b9b04c7485f938eb6ec06e0dfd9f1976d4c883feba05b4"} err="failed to get container status \"5bc676f4c276e7d5a1b9b04c7485f938eb6ec06e0dfd9f1976d4c883feba05b4\": rpc error: code = NotFound desc = could not find container \"5bc676f4c276e7d5a1b9b04c7485f938eb6ec06e0dfd9f1976d4c883feba05b4\": container with ID starting with 5bc676f4c276e7d5a1b9b04c7485f938eb6ec06e0dfd9f1976d4c883feba05b4 not found: ID does not exist" Dec 16 12:41:13 crc kubenswrapper[4805]: I1216 12:41:13.557323 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" event={"ID":"128a7ec0-e80f-4147-a459-283405d9c838","Type":"ContainerStarted","Data":"94441ee530cd9450d503309a136a3efe36e4ab1b93113833d29b5d1fc013a82c"} Dec 16 12:41:13 crc kubenswrapper[4805]: I1216 12:41:13.609525 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r662d"] Dec 16 12:41:13 crc kubenswrapper[4805]: I1216 12:41:13.609962 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r662d" podUID="7efc5167-820b-4217-b5a4-2636dcf25c71" containerName="registry-server" containerID="cri-o://41547d617989edc4eff991335babba956eed3821890920a79af6347b1d559ab8" gracePeriod=2 Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.079634 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.245123 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7efc5167-820b-4217-b5a4-2636dcf25c71-catalog-content\") pod \"7efc5167-820b-4217-b5a4-2636dcf25c71\" (UID: \"7efc5167-820b-4217-b5a4-2636dcf25c71\") " Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.245283 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97w86\" (UniqueName: \"kubernetes.io/projected/7efc5167-820b-4217-b5a4-2636dcf25c71-kube-api-access-97w86\") pod \"7efc5167-820b-4217-b5a4-2636dcf25c71\" (UID: \"7efc5167-820b-4217-b5a4-2636dcf25c71\") " Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.245478 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7efc5167-820b-4217-b5a4-2636dcf25c71-utilities\") pod \"7efc5167-820b-4217-b5a4-2636dcf25c71\" (UID: \"7efc5167-820b-4217-b5a4-2636dcf25c71\") " Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.246232 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7efc5167-820b-4217-b5a4-2636dcf25c71-utilities" (OuterVolumeSpecName: "utilities") pod "7efc5167-820b-4217-b5a4-2636dcf25c71" (UID: "7efc5167-820b-4217-b5a4-2636dcf25c71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.260325 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7efc5167-820b-4217-b5a4-2636dcf25c71-kube-api-access-97w86" (OuterVolumeSpecName: "kube-api-access-97w86") pod "7efc5167-820b-4217-b5a4-2636dcf25c71" (UID: "7efc5167-820b-4217-b5a4-2636dcf25c71"). InnerVolumeSpecName "kube-api-access-97w86". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.271213 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7efc5167-820b-4217-b5a4-2636dcf25c71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7efc5167-820b-4217-b5a4-2636dcf25c71" (UID: "7efc5167-820b-4217-b5a4-2636dcf25c71"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.348374 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97w86\" (UniqueName: \"kubernetes.io/projected/7efc5167-820b-4217-b5a4-2636dcf25c71-kube-api-access-97w86\") on node \"crc\" DevicePath \"\"" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.348409 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7efc5167-820b-4217-b5a4-2636dcf25c71-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.348420 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7efc5167-820b-4217-b5a4-2636dcf25c71-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.536839 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea6a834-b4f0-488e-84b0-1fff77a3192b" path="/var/lib/kubelet/pods/1ea6a834-b4f0-488e-84b0-1fff77a3192b/volumes" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.571745 4805 generic.go:334] "Generic (PLEG): container finished" podID="7efc5167-820b-4217-b5a4-2636dcf25c71" containerID="41547d617989edc4eff991335babba956eed3821890920a79af6347b1d559ab8" exitCode=0 Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.571808 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r662d" event={"ID":"7efc5167-820b-4217-b5a4-2636dcf25c71","Type":"ContainerDied","Data":"41547d617989edc4eff991335babba956eed3821890920a79af6347b1d559ab8"} Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.571846 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r662d" event={"ID":"7efc5167-820b-4217-b5a4-2636dcf25c71","Type":"ContainerDied","Data":"9707108c1a2a50aab5eccfa3bc4d90a7a316f9bb75461e1e8a77c5d9cb5259a7"} Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.571867 4805 scope.go:117] "RemoveContainer" containerID="41547d617989edc4eff991335babba956eed3821890920a79af6347b1d559ab8" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.572020 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r662d" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.576576 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" event={"ID":"128a7ec0-e80f-4147-a459-283405d9c838","Type":"ContainerStarted","Data":"903da2f900b85c7b030633c96dbd83d9bacc34ce9ece4dfeacda09063fd35171"} Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.600075 4805 scope.go:117] "RemoveContainer" containerID="f6a48720ae25c83e88cf202a2a7287871e882d9ddd28fc0c0af38e4a2999cc90" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.605314 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r662d"] Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.613968 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r662d"] Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.635326 4805 scope.go:117] "RemoveContainer" containerID="28a0ec2a96034af0ae2d36e1984a3cbf20e46ab4871350597aef0cca3e4bd746" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.647705 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" podStartSLOduration=2.83528996 podStartE2EDuration="3.647684981s" podCreationTimestamp="2025-12-16 12:41:11 +0000 UTC" firstStartedPulling="2025-12-16 12:41:12.711843312 +0000 UTC m=+2746.430101127" lastFinishedPulling="2025-12-16 12:41:13.524238343 +0000 UTC m=+2747.242496148" observedRunningTime="2025-12-16 12:41:14.64033167 +0000 UTC m=+2748.358589485" watchObservedRunningTime="2025-12-16 12:41:14.647684981 +0000 UTC m=+2748.365942796" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.680666 4805 scope.go:117] "RemoveContainer" containerID="41547d617989edc4eff991335babba956eed3821890920a79af6347b1d559ab8" Dec 16 12:41:14 crc kubenswrapper[4805]: E1216 12:41:14.681576 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41547d617989edc4eff991335babba956eed3821890920a79af6347b1d559ab8\": container with ID starting with 41547d617989edc4eff991335babba956eed3821890920a79af6347b1d559ab8 not found: ID does not exist" containerID="41547d617989edc4eff991335babba956eed3821890920a79af6347b1d559ab8" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.681624 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41547d617989edc4eff991335babba956eed3821890920a79af6347b1d559ab8"} err="failed to get container status \"41547d617989edc4eff991335babba956eed3821890920a79af6347b1d559ab8\": rpc error: code = NotFound desc = could not find container \"41547d617989edc4eff991335babba956eed3821890920a79af6347b1d559ab8\": container with ID starting with 41547d617989edc4eff991335babba956eed3821890920a79af6347b1d559ab8 not found: ID does not exist" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.681651 4805 scope.go:117] "RemoveContainer" containerID="f6a48720ae25c83e88cf202a2a7287871e882d9ddd28fc0c0af38e4a2999cc90" Dec 16 12:41:14 crc kubenswrapper[4805]: E1216 12:41:14.682238 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6a48720ae25c83e88cf202a2a7287871e882d9ddd28fc0c0af38e4a2999cc90\": container with ID starting with f6a48720ae25c83e88cf202a2a7287871e882d9ddd28fc0c0af38e4a2999cc90 not found: ID does not 
exist" containerID="f6a48720ae25c83e88cf202a2a7287871e882d9ddd28fc0c0af38e4a2999cc90" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.682279 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6a48720ae25c83e88cf202a2a7287871e882d9ddd28fc0c0af38e4a2999cc90"} err="failed to get container status \"f6a48720ae25c83e88cf202a2a7287871e882d9ddd28fc0c0af38e4a2999cc90\": rpc error: code = NotFound desc = could not find container \"f6a48720ae25c83e88cf202a2a7287871e882d9ddd28fc0c0af38e4a2999cc90\": container with ID starting with f6a48720ae25c83e88cf202a2a7287871e882d9ddd28fc0c0af38e4a2999cc90 not found: ID does not exist" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.682320 4805 scope.go:117] "RemoveContainer" containerID="28a0ec2a96034af0ae2d36e1984a3cbf20e46ab4871350597aef0cca3e4bd746" Dec 16 12:41:14 crc kubenswrapper[4805]: E1216 12:41:14.682903 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a0ec2a96034af0ae2d36e1984a3cbf20e46ab4871350597aef0cca3e4bd746\": container with ID starting with 28a0ec2a96034af0ae2d36e1984a3cbf20e46ab4871350597aef0cca3e4bd746 not found: ID does not exist" containerID="28a0ec2a96034af0ae2d36e1984a3cbf20e46ab4871350597aef0cca3e4bd746" Dec 16 12:41:14 crc kubenswrapper[4805]: I1216 12:41:14.682974 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a0ec2a96034af0ae2d36e1984a3cbf20e46ab4871350597aef0cca3e4bd746"} err="failed to get container status \"28a0ec2a96034af0ae2d36e1984a3cbf20e46ab4871350597aef0cca3e4bd746\": rpc error: code = NotFound desc = could not find container \"28a0ec2a96034af0ae2d36e1984a3cbf20e46ab4871350597aef0cca3e4bd746\": container with ID starting with 28a0ec2a96034af0ae2d36e1984a3cbf20e46ab4871350597aef0cca3e4bd746 not found: ID does not exist" Dec 16 12:41:16 crc kubenswrapper[4805]: I1216 12:41:16.533590 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7efc5167-820b-4217-b5a4-2636dcf25c71" path="/var/lib/kubelet/pods/7efc5167-820b-4217-b5a4-2636dcf25c71/volumes" Dec 16 12:41:27 crc kubenswrapper[4805]: I1216 12:41:27.072336 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:41:27 crc kubenswrapper[4805]: I1216 12:41:27.073033 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:41:57 crc kubenswrapper[4805]: I1216 12:41:57.071417 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:41:57 crc kubenswrapper[4805]: I1216 12:41:57.071927 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:41:57 crc kubenswrapper[4805]: I1216 12:41:57.071979 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 12:41:57 crc kubenswrapper[4805]: I1216 12:41:57.072844 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ba735f8235085fb58f6062dbfcc49f2fae1621b630408a76dc90179ba06995b"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 12:41:57 crc kubenswrapper[4805]: I1216 12:41:57.072894 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://3ba735f8235085fb58f6062dbfcc49f2fae1621b630408a76dc90179ba06995b" gracePeriod=600 Dec 16 12:41:57 crc kubenswrapper[4805]: I1216 12:41:57.987308 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="3ba735f8235085fb58f6062dbfcc49f2fae1621b630408a76dc90179ba06995b" exitCode=0 Dec 16 12:41:57 crc kubenswrapper[4805]: I1216 12:41:57.987361 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"3ba735f8235085fb58f6062dbfcc49f2fae1621b630408a76dc90179ba06995b"} Dec 16 12:41:57 crc kubenswrapper[4805]: I1216 12:41:57.987664 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb"} Dec 16 12:41:57 crc kubenswrapper[4805]: I1216 12:41:57.987711 4805 scope.go:117] "RemoveContainer" containerID="b07dd515e27de1811aa2e6d62b051dcb5010b8192d5c511ad9d62837def8127b" Dec 16 12:42:59 crc kubenswrapper[4805]: I1216 12:42:59.756370 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-6b444986fd-5mnhx" podUID="b885ab69-dc83-439c-9040-09fc3d238093" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 12:42:59 crc kubenswrapper[4805]: I1216 12:42:59.853730 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-cc776f956-smg8x" podUID="9b3aad50-49b1-43c0-84c9-15368e69abae" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 12:43:57 crc kubenswrapper[4805]: I1216 12:43:57.071317 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:43:57 crc kubenswrapper[4805]: I1216 12:43:57.071848 4805 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:44:27 crc kubenswrapper[4805]: I1216 12:44:27.071419 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:44:27 crc kubenswrapper[4805]: I1216 12:44:27.071997 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:44:57 crc kubenswrapper[4805]: I1216 12:44:57.071508 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:44:57 crc kubenswrapper[4805]: I1216 12:44:57.072244 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:44:57 crc kubenswrapper[4805]: I1216 12:44:57.072336 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 12:44:57 crc kubenswrapper[4805]: I1216 12:44:57.073732 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 12:44:57 crc kubenswrapper[4805]: I1216 12:44:57.073871 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" gracePeriod=600 Dec 16 12:44:58 crc kubenswrapper[4805]: I1216 12:44:58.598656 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" exitCode=0 Dec 16 12:44:58 crc kubenswrapper[4805]: I1216 12:44:58.598866 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb"} Dec 16 12:44:58 crc kubenswrapper[4805]: I1216 12:44:58.599882 4805 scope.go:117] "RemoveContainer" 
containerID="3ba735f8235085fb58f6062dbfcc49f2fae1621b630408a76dc90179ba06995b" Dec 16 12:44:58 crc kubenswrapper[4805]: E1216 12:44:58.895139 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:44:59 crc kubenswrapper[4805]: I1216 12:44:59.612095 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:44:59 crc kubenswrapper[4805]: E1216 12:44:59.613215 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.149856 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd"] Dec 16 12:45:00 crc kubenswrapper[4805]: E1216 12:45:00.150654 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efc5167-820b-4217-b5a4-2636dcf25c71" containerName="registry-server" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.150682 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efc5167-820b-4217-b5a4-2636dcf25c71" containerName="registry-server" Dec 16 12:45:00 crc kubenswrapper[4805]: E1216 12:45:00.150703 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea6a834-b4f0-488e-84b0-1fff77a3192b" containerName="extract-content" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.150712 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea6a834-b4f0-488e-84b0-1fff77a3192b" containerName="extract-content" Dec 16 12:45:00 crc kubenswrapper[4805]: E1216 12:45:00.150732 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea6a834-b4f0-488e-84b0-1fff77a3192b" containerName="extract-utilities" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.150742 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea6a834-b4f0-488e-84b0-1fff77a3192b" containerName="extract-utilities" Dec 16 12:45:00 crc kubenswrapper[4805]: E1216 12:45:00.150771 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efc5167-820b-4217-b5a4-2636dcf25c71" containerName="extract-content" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.150780 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efc5167-820b-4217-b5a4-2636dcf25c71" containerName="extract-content" Dec 16 12:45:00 crc kubenswrapper[4805]: E1216 12:45:00.150796 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea6a834-b4f0-488e-84b0-1fff77a3192b" containerName="registry-server" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.150804 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea6a834-b4f0-488e-84b0-1fff77a3192b" containerName="registry-server" Dec 16 12:45:00 crc kubenswrapper[4805]: E1216 12:45:00.150825 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7efc5167-820b-4217-b5a4-2636dcf25c71" containerName="extract-utilities" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.150832 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efc5167-820b-4217-b5a4-2636dcf25c71" containerName="extract-utilities" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.151078 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea6a834-b4f0-488e-84b0-1fff77a3192b" containerName="registry-server" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.151098 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efc5167-820b-4217-b5a4-2636dcf25c71" containerName="registry-server" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.151827 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.155481 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.164691 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.172019 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd"] Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.180513 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqbc4\" (UniqueName: \"kubernetes.io/projected/37553067-4bc6-46a6-bd7d-072105c8f46b-kube-api-access-nqbc4\") pod \"collect-profiles-29431485-lqgtd\" (UID: \"37553067-4bc6-46a6-bd7d-072105c8f46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.180716 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37553067-4bc6-46a6-bd7d-072105c8f46b-secret-volume\") pod \"collect-profiles-29431485-lqgtd\" (UID: \"37553067-4bc6-46a6-bd7d-072105c8f46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.180820 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37553067-4bc6-46a6-bd7d-072105c8f46b-config-volume\") pod \"collect-profiles-29431485-lqgtd\" (UID: \"37553067-4bc6-46a6-bd7d-072105c8f46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.282092 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37553067-4bc6-46a6-bd7d-072105c8f46b-secret-volume\") pod \"collect-profiles-29431485-lqgtd\" (UID: \"37553067-4bc6-46a6-bd7d-072105c8f46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.282481 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37553067-4bc6-46a6-bd7d-072105c8f46b-config-volume\") pod \"collect-profiles-29431485-lqgtd\" (UID: 
\"37553067-4bc6-46a6-bd7d-072105c8f46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.282723 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqbc4\" (UniqueName: \"kubernetes.io/projected/37553067-4bc6-46a6-bd7d-072105c8f46b-kube-api-access-nqbc4\") pod \"collect-profiles-29431485-lqgtd\" (UID: \"37553067-4bc6-46a6-bd7d-072105c8f46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.283838 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37553067-4bc6-46a6-bd7d-072105c8f46b-config-volume\") pod \"collect-profiles-29431485-lqgtd\" (UID: \"37553067-4bc6-46a6-bd7d-072105c8f46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.294913 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37553067-4bc6-46a6-bd7d-072105c8f46b-secret-volume\") pod \"collect-profiles-29431485-lqgtd\" (UID: \"37553067-4bc6-46a6-bd7d-072105c8f46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.302405 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqbc4\" (UniqueName: \"kubernetes.io/projected/37553067-4bc6-46a6-bd7d-072105c8f46b-kube-api-access-nqbc4\") pod \"collect-profiles-29431485-lqgtd\" (UID: \"37553067-4bc6-46a6-bd7d-072105c8f46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.490288 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" Dec 16 12:45:00 crc kubenswrapper[4805]: I1216 12:45:00.943797 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd"] Dec 16 12:45:01 crc kubenswrapper[4805]: I1216 12:45:01.630325 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" event={"ID":"37553067-4bc6-46a6-bd7d-072105c8f46b","Type":"ContainerStarted","Data":"0653b112ddc181dc633af63aa7c533a5ec506d9ea5cf1af2feefbc68ae2ef3eb"} Dec 16 12:45:03 crc kubenswrapper[4805]: I1216 12:45:03.656093 4805 generic.go:334] "Generic (PLEG): container finished" podID="128a7ec0-e80f-4147-a459-283405d9c838" containerID="903da2f900b85c7b030633c96dbd83d9bacc34ce9ece4dfeacda09063fd35171" exitCode=0 Dec 16 12:45:03 crc kubenswrapper[4805]: I1216 12:45:03.656760 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" event={"ID":"128a7ec0-e80f-4147-a459-283405d9c838","Type":"ContainerDied","Data":"903da2f900b85c7b030633c96dbd83d9bacc34ce9ece4dfeacda09063fd35171"} Dec 16 12:45:03 crc kubenswrapper[4805]: I1216 12:45:03.661702 4805 generic.go:334] "Generic (PLEG): container finished" podID="37553067-4bc6-46a6-bd7d-072105c8f46b" containerID="81bd240fa2ccb6b0b674fdb18147cda63486cbe640926fa477cbfc294fc8684e" exitCode=0 Dec 16 12:45:03 crc kubenswrapper[4805]: I1216 12:45:03.661855 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" event={"ID":"37553067-4bc6-46a6-bd7d-072105c8f46b","Type":"ContainerDied","Data":"81bd240fa2ccb6b0b674fdb18147cda63486cbe640926fa477cbfc294fc8684e"} Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.082065 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.178730 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqbc4\" (UniqueName: \"kubernetes.io/projected/37553067-4bc6-46a6-bd7d-072105c8f46b-kube-api-access-nqbc4\") pod \"37553067-4bc6-46a6-bd7d-072105c8f46b\" (UID: \"37553067-4bc6-46a6-bd7d-072105c8f46b\") " Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.179165 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37553067-4bc6-46a6-bd7d-072105c8f46b-secret-volume\") pod \"37553067-4bc6-46a6-bd7d-072105c8f46b\" (UID: \"37553067-4bc6-46a6-bd7d-072105c8f46b\") " Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.179289 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37553067-4bc6-46a6-bd7d-072105c8f46b-config-volume\") pod \"37553067-4bc6-46a6-bd7d-072105c8f46b\" (UID: \"37553067-4bc6-46a6-bd7d-072105c8f46b\") " Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.179955 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37553067-4bc6-46a6-bd7d-072105c8f46b-config-volume" (OuterVolumeSpecName: "config-volume") pod "37553067-4bc6-46a6-bd7d-072105c8f46b" (UID: "37553067-4bc6-46a6-bd7d-072105c8f46b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.184657 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37553067-4bc6-46a6-bd7d-072105c8f46b-kube-api-access-nqbc4" (OuterVolumeSpecName: "kube-api-access-nqbc4") pod "37553067-4bc6-46a6-bd7d-072105c8f46b" (UID: "37553067-4bc6-46a6-bd7d-072105c8f46b"). InnerVolumeSpecName "kube-api-access-nqbc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.185060 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37553067-4bc6-46a6-bd7d-072105c8f46b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "37553067-4bc6-46a6-bd7d-072105c8f46b" (UID: "37553067-4bc6-46a6-bd7d-072105c8f46b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.235660 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.281958 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqbc4\" (UniqueName: \"kubernetes.io/projected/37553067-4bc6-46a6-bd7d-072105c8f46b-kube-api-access-nqbc4\") on node \"crc\" DevicePath \"\"" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.281988 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37553067-4bc6-46a6-bd7d-072105c8f46b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.281998 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37553067-4bc6-46a6-bd7d-072105c8f46b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.383733 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-migration-ssh-key-0\") pod \"128a7ec0-e80f-4147-a459-283405d9c838\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.384073 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-cell1-compute-config-1\") pod \"128a7ec0-e80f-4147-a459-283405d9c838\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.384322 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-combined-ca-bundle\") pod \"128a7ec0-e80f-4147-a459-283405d9c838\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.384425 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmgnl\" (UniqueName: \"kubernetes.io/projected/128a7ec0-e80f-4147-a459-283405d9c838-kube-api-access-pmgnl\") pod \"128a7ec0-e80f-4147-a459-283405d9c838\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.384622 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-cell1-compute-config-0\") pod \"128a7ec0-e80f-4147-a459-283405d9c838\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.384781 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-ssh-key\") pod \"128a7ec0-e80f-4147-a459-283405d9c838\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.384916 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-inventory\") pod \"128a7ec0-e80f-4147-a459-283405d9c838\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.385061 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/128a7ec0-e80f-4147-a459-283405d9c838-nova-extra-config-0\") pod \"128a7ec0-e80f-4147-a459-283405d9c838\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.385224 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-migration-ssh-key-1\") pod \"128a7ec0-e80f-4147-a459-283405d9c838\" (UID: \"128a7ec0-e80f-4147-a459-283405d9c838\") " Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.387855 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "128a7ec0-e80f-4147-a459-283405d9c838" (UID: "128a7ec0-e80f-4147-a459-283405d9c838"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.389032 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/128a7ec0-e80f-4147-a459-283405d9c838-kube-api-access-pmgnl" (OuterVolumeSpecName: "kube-api-access-pmgnl") pod "128a7ec0-e80f-4147-a459-283405d9c838" (UID: "128a7ec0-e80f-4147-a459-283405d9c838"). InnerVolumeSpecName "kube-api-access-pmgnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.408588 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/128a7ec0-e80f-4147-a459-283405d9c838-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "128a7ec0-e80f-4147-a459-283405d9c838" (UID: "128a7ec0-e80f-4147-a459-283405d9c838"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.412183 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "128a7ec0-e80f-4147-a459-283405d9c838" (UID: "128a7ec0-e80f-4147-a459-283405d9c838"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.414627 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-inventory" (OuterVolumeSpecName: "inventory") pod "128a7ec0-e80f-4147-a459-283405d9c838" (UID: "128a7ec0-e80f-4147-a459-283405d9c838"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.418207 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "128a7ec0-e80f-4147-a459-283405d9c838" (UID: "128a7ec0-e80f-4147-a459-283405d9c838"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.419866 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "128a7ec0-e80f-4147-a459-283405d9c838" (UID: "128a7ec0-e80f-4147-a459-283405d9c838"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.421213 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "128a7ec0-e80f-4147-a459-283405d9c838" (UID: "128a7ec0-e80f-4147-a459-283405d9c838"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.422163 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "128a7ec0-e80f-4147-a459-283405d9c838" (UID: "128a7ec0-e80f-4147-a459-283405d9c838"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.488273 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.488326 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-inventory\") on node \"crc\" DevicePath \"\"" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.488342 4805 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/128a7ec0-e80f-4147-a459-283405d9c838-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.488355 4805 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.488366 4805 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.488377 4805 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.488387 4805 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.488399 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmgnl\" (UniqueName: \"kubernetes.io/projected/128a7ec0-e80f-4147-a459-283405d9c838-kube-api-access-pmgnl\") on node \"crc\" DevicePath \"\"" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.488411 4805 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/128a7ec0-e80f-4147-a459-283405d9c838-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.688903 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" event={"ID":"128a7ec0-e80f-4147-a459-283405d9c838","Type":"ContainerDied","Data":"94441ee530cd9450d503309a136a3efe36e4ab1b93113833d29b5d1fc013a82c"} Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.689027 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94441ee530cd9450d503309a136a3efe36e4ab1b93113833d29b5d1fc013a82c" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.689090 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwk4q" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.691966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" event={"ID":"37553067-4bc6-46a6-bd7d-072105c8f46b","Type":"ContainerDied","Data":"0653b112ddc181dc633af63aa7c533a5ec506d9ea5cf1af2feefbc68ae2ef3eb"} Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.692001 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0653b112ddc181dc633af63aa7c533a5ec506d9ea5cf1af2feefbc68ae2ef3eb" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.692029 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.785548 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6"] Dec 16 12:45:05 crc kubenswrapper[4805]: E1216 12:45:05.786665 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37553067-4bc6-46a6-bd7d-072105c8f46b" containerName="collect-profiles" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.786685 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="37553067-4bc6-46a6-bd7d-072105c8f46b" containerName="collect-profiles" Dec 16 12:45:05 crc kubenswrapper[4805]: E1216 12:45:05.786699 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128a7ec0-e80f-4147-a459-283405d9c838" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.786706 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="128a7ec0-e80f-4147-a459-283405d9c838" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.786907 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="37553067-4bc6-46a6-bd7d-072105c8f46b" containerName="collect-profiles" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.786930 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="128a7ec0-e80f-4147-a459-283405d9c838" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.787564 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.789691 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jqms2" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.790416 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.790838 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.790864 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.798556 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.808910 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6"] Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.896645 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.896711 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkftr\" (UniqueName: \"kubernetes.io/projected/e4b7d191-e86d-4386-935b-e3ce28794d6d-kube-api-access-qkftr\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.896742 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.896777 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.896804 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.896833 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.896908 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.998970 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.999016 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkftr\" (UniqueName: \"kubernetes.io/projected/e4b7d191-e86d-4386-935b-e3ce28794d6d-kube-api-access-qkftr\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.999063 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.999091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.999118 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.999160 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:05 crc kubenswrapper[4805]: I1216 12:45:05.999241 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.002686 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.002811 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.003231 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.003249 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.006829 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.016750 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.021509 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkftr\" (UniqueName: \"kubernetes.io/projected/e4b7d191-e86d-4386-935b-e3ce28794d6d-kube-api-access-qkftr\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6\" (UID: 
\"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.120765 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.165771 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2"] Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.197334 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs"] Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.202210 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw"] Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.217767 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431440-mc9g2"] Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.233262 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431440-vndbs"] Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.246015 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431440-2d8fw"] Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.546386 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25299af2-40e6-49f4-ac1c-48c0fde0882c" path="/var/lib/kubelet/pods/25299af2-40e6-49f4-ac1c-48c0fde0882c/volumes" Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.606249 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="369f8d4d-bdd3-4d02-8869-93f0e5c6593f" path="/var/lib/kubelet/pods/369f8d4d-bdd3-4d02-8869-93f0e5c6593f/volumes" Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.610480 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7f164bf-f966-44c7-82d2-1841b4a6cffe" path="/var/lib/kubelet/pods/b7f164bf-f966-44c7-82d2-1841b4a6cffe/volumes" Dec 16 12:45:06 crc kubenswrapper[4805]: I1216 12:45:06.733548 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6"] Dec 16 12:45:07 crc kubenswrapper[4805]: I1216 12:45:07.710776 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" event={"ID":"e4b7d191-e86d-4386-935b-e3ce28794d6d","Type":"ContainerStarted","Data":"9ae7d9b890ed625a0ff925f2277588a54af24fd2ed8ba81a2c8f4c49da6c2d85"} Dec 16 12:45:10 crc kubenswrapper[4805]: I1216 12:45:10.744972 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" event={"ID":"e4b7d191-e86d-4386-935b-e3ce28794d6d","Type":"ContainerStarted","Data":"7714901e7a558e3b750b12943e0de6dae78918c82cd051ea919fdc52c99b237a"} Dec 16 12:45:10 crc kubenswrapper[4805]: I1216 12:45:10.770682 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" podStartSLOduration=2.098973114 podStartE2EDuration="5.770659896s" podCreationTimestamp="2025-12-16 12:45:05 +0000 UTC" firstStartedPulling="2025-12-16 12:45:06.737418416 +0000 UTC m=+2980.455676231" lastFinishedPulling="2025-12-16 12:45:10.409105198 
+0000 UTC m=+2984.127363013" observedRunningTime="2025-12-16 12:45:10.760911267 +0000 UTC m=+2984.479169082" watchObservedRunningTime="2025-12-16 12:45:10.770659896 +0000 UTC m=+2984.488917721" Dec 16 12:45:13 crc kubenswrapper[4805]: I1216 12:45:13.523985 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:45:13 crc kubenswrapper[4805]: E1216 12:45:13.525074 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:45:26 crc kubenswrapper[4805]: I1216 12:45:26.536267 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:45:26 crc kubenswrapper[4805]: E1216 12:45:26.537063 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:45:38 crc kubenswrapper[4805]: I1216 12:45:38.526205 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:45:38 crc kubenswrapper[4805]: E1216 12:45:38.529663 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:45:50 crc kubenswrapper[4805]: I1216 12:45:50.522737 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:45:50 crc kubenswrapper[4805]: E1216 12:45:50.523652 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:45:50 crc kubenswrapper[4805]: I1216 12:45:50.711547 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d2jcx"] Dec 16 12:45:50 crc kubenswrapper[4805]: I1216 12:45:50.740003 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2jcx"] Dec 16 12:45:50 crc kubenswrapper[4805]: I1216 12:45:50.740186 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:45:50 crc kubenswrapper[4805]: I1216 12:45:50.810734 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8401d6ab-3939-43ff-811f-99f1b6a17003-catalog-content\") pod \"certified-operators-d2jcx\" (UID: \"8401d6ab-3939-43ff-811f-99f1b6a17003\") " pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:45:50 crc kubenswrapper[4805]: I1216 12:45:50.810956 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8401d6ab-3939-43ff-811f-99f1b6a17003-utilities\") pod \"certified-operators-d2jcx\" (UID: \"8401d6ab-3939-43ff-811f-99f1b6a17003\") " pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:45:50 crc kubenswrapper[4805]: I1216 12:45:50.811018 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4f4s\" (UniqueName: \"kubernetes.io/projected/8401d6ab-3939-43ff-811f-99f1b6a17003-kube-api-access-q4f4s\") pod \"certified-operators-d2jcx\" (UID: \"8401d6ab-3939-43ff-811f-99f1b6a17003\") " pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:45:50 crc kubenswrapper[4805]: I1216 12:45:50.913314 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8401d6ab-3939-43ff-811f-99f1b6a17003-utilities\") pod \"certified-operators-d2jcx\" (UID: \"8401d6ab-3939-43ff-811f-99f1b6a17003\") " pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:45:50 crc kubenswrapper[4805]: I1216 12:45:50.913421 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4f4s\" (UniqueName: \"kubernetes.io/projected/8401d6ab-3939-43ff-811f-99f1b6a17003-kube-api-access-q4f4s\") pod \"certified-operators-d2jcx\" (UID: \"8401d6ab-3939-43ff-811f-99f1b6a17003\") " pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:45:50 crc kubenswrapper[4805]: I1216 12:45:50.913472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8401d6ab-3939-43ff-811f-99f1b6a17003-catalog-content\") pod \"certified-operators-d2jcx\" (UID: \"8401d6ab-3939-43ff-811f-99f1b6a17003\") " pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:45:50 crc kubenswrapper[4805]: I1216 12:45:50.914053 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8401d6ab-3939-43ff-811f-99f1b6a17003-catalog-content\") pod \"certified-operators-d2jcx\" (UID: \"8401d6ab-3939-43ff-811f-99f1b6a17003\") " pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:45:50 crc kubenswrapper[4805]: I1216 12:45:50.915187 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8401d6ab-3939-43ff-811f-99f1b6a17003-utilities\") pod \"certified-operators-d2jcx\" (UID: \"8401d6ab-3939-43ff-811f-99f1b6a17003\") " pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:45:50 crc kubenswrapper[4805]: I1216 12:45:50.954027 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4f4s\" (UniqueName: \"kubernetes.io/projected/8401d6ab-3939-43ff-811f-99f1b6a17003-kube-api-access-q4f4s\") pod 
\"certified-operators-d2jcx\" (UID: \"8401d6ab-3939-43ff-811f-99f1b6a17003\") " pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:45:51 crc kubenswrapper[4805]: I1216 12:45:51.066057 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:45:51 crc kubenswrapper[4805]: I1216 12:45:51.654657 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2jcx"] Dec 16 12:45:52 crc kubenswrapper[4805]: I1216 12:45:52.134402 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2jcx" event={"ID":"8401d6ab-3939-43ff-811f-99f1b6a17003","Type":"ContainerStarted","Data":"69f414e1878e70c4ba134956c420d45243e0446b296157982305c73d91b1a767"} Dec 16 12:45:53 crc kubenswrapper[4805]: I1216 12:45:53.157253 4805 generic.go:334] "Generic (PLEG): container finished" podID="8401d6ab-3939-43ff-811f-99f1b6a17003" containerID="837a69c2bb92e5fbdf6392e925f18291c388e1969b1b482680f31ea7059c337b" exitCode=0 Dec 16 12:45:53 crc kubenswrapper[4805]: I1216 12:45:53.157301 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2jcx" event={"ID":"8401d6ab-3939-43ff-811f-99f1b6a17003","Type":"ContainerDied","Data":"837a69c2bb92e5fbdf6392e925f18291c388e1969b1b482680f31ea7059c337b"} Dec 16 12:45:53 crc kubenswrapper[4805]: I1216 12:45:53.159668 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 16 12:45:54 crc kubenswrapper[4805]: I1216 12:45:54.170787 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2jcx" event={"ID":"8401d6ab-3939-43ff-811f-99f1b6a17003","Type":"ContainerStarted","Data":"5c86dc4eeee104b7e055c667dba33358abd0e4e001c138beb669e0367f3a677a"} Dec 16 12:45:56 crc kubenswrapper[4805]: I1216 12:45:56.188389 4805 generic.go:334] "Generic (PLEG): container finished" podID="8401d6ab-3939-43ff-811f-99f1b6a17003" containerID="5c86dc4eeee104b7e055c667dba33358abd0e4e001c138beb669e0367f3a677a" exitCode=0 Dec 16 12:45:56 crc kubenswrapper[4805]: I1216 12:45:56.188448 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2jcx" event={"ID":"8401d6ab-3939-43ff-811f-99f1b6a17003","Type":"ContainerDied","Data":"5c86dc4eeee104b7e055c667dba33358abd0e4e001c138beb669e0367f3a677a"} Dec 16 12:45:57 crc kubenswrapper[4805]: I1216 12:45:57.199842 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2jcx" event={"ID":"8401d6ab-3939-43ff-811f-99f1b6a17003","Type":"ContainerStarted","Data":"9faed8f44461cf0f33c6e3a651b7677f4b4c38caadef849a6c18e7b3ca5da7ce"} Dec 16 12:45:57 crc kubenswrapper[4805]: I1216 12:45:57.230637 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d2jcx" podStartSLOduration=3.764079121 podStartE2EDuration="7.23061236s" podCreationTimestamp="2025-12-16 12:45:50 +0000 UTC" firstStartedPulling="2025-12-16 12:45:53.159369327 +0000 UTC m=+3026.877627132" lastFinishedPulling="2025-12-16 12:45:56.625902566 +0000 UTC m=+3030.344160371" observedRunningTime="2025-12-16 12:45:57.225528685 +0000 UTC m=+3030.943786490" watchObservedRunningTime="2025-12-16 12:45:57.23061236 +0000 UTC m=+3030.948870175" Dec 16 12:46:00 crc kubenswrapper[4805]: I1216 12:46:00.025376 4805 scope.go:117] "RemoveContainer" 
containerID="b30755934546dd9177672f2c5282aa2bd9ae99937412d7cfcd7485c7cc81f73c" Dec 16 12:46:00 crc kubenswrapper[4805]: I1216 12:46:00.076759 4805 scope.go:117] "RemoveContainer" containerID="903d00db4fe6781ab447ba234544ebe1e7e39550153631beddaa3fa6d3cb3322" Dec 16 12:46:00 crc kubenswrapper[4805]: I1216 12:46:00.121008 4805 scope.go:117] "RemoveContainer" containerID="56adc8eea5544632c07d921ee409d199225698603e22e1516bae4e8cc57ea445" Dec 16 12:46:01 crc kubenswrapper[4805]: I1216 12:46:01.067455 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:46:01 crc kubenswrapper[4805]: I1216 12:46:01.067834 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:46:01 crc kubenswrapper[4805]: I1216 12:46:01.117104 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:46:01 crc kubenswrapper[4805]: I1216 12:46:01.286757 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:46:01 crc kubenswrapper[4805]: I1216 12:46:01.351519 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2jcx"] Dec 16 12:46:02 crc kubenswrapper[4805]: I1216 12:46:02.525240 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:46:02 crc kubenswrapper[4805]: E1216 12:46:02.525517 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:46:03 crc kubenswrapper[4805]: I1216 12:46:03.262360 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d2jcx" podUID="8401d6ab-3939-43ff-811f-99f1b6a17003" containerName="registry-server" containerID="cri-o://9faed8f44461cf0f33c6e3a651b7677f4b4c38caadef849a6c18e7b3ca5da7ce" gracePeriod=2 Dec 16 12:46:03 crc kubenswrapper[4805]: I1216 12:46:03.739757 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:46:03 crc kubenswrapper[4805]: I1216 12:46:03.913766 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4f4s\" (UniqueName: \"kubernetes.io/projected/8401d6ab-3939-43ff-811f-99f1b6a17003-kube-api-access-q4f4s\") pod \"8401d6ab-3939-43ff-811f-99f1b6a17003\" (UID: \"8401d6ab-3939-43ff-811f-99f1b6a17003\") " Dec 16 12:46:03 crc kubenswrapper[4805]: I1216 12:46:03.914068 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8401d6ab-3939-43ff-811f-99f1b6a17003-catalog-content\") pod \"8401d6ab-3939-43ff-811f-99f1b6a17003\" (UID: \"8401d6ab-3939-43ff-811f-99f1b6a17003\") " Dec 16 12:46:03 crc kubenswrapper[4805]: I1216 12:46:03.914089 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8401d6ab-3939-43ff-811f-99f1b6a17003-utilities\") pod \"8401d6ab-3939-43ff-811f-99f1b6a17003\" (UID: \"8401d6ab-3939-43ff-811f-99f1b6a17003\") " Dec 16 12:46:03 crc kubenswrapper[4805]: I1216 12:46:03.915713 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8401d6ab-3939-43ff-811f-99f1b6a17003-utilities" (OuterVolumeSpecName: "utilities") pod "8401d6ab-3939-43ff-811f-99f1b6a17003" (UID: "8401d6ab-3939-43ff-811f-99f1b6a17003"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:46:03 crc kubenswrapper[4805]: I1216 12:46:03.926544 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8401d6ab-3939-43ff-811f-99f1b6a17003-kube-api-access-q4f4s" (OuterVolumeSpecName: "kube-api-access-q4f4s") pod "8401d6ab-3939-43ff-811f-99f1b6a17003" (UID: "8401d6ab-3939-43ff-811f-99f1b6a17003"). InnerVolumeSpecName "kube-api-access-q4f4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:46:03 crc kubenswrapper[4805]: I1216 12:46:03.985815 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8401d6ab-3939-43ff-811f-99f1b6a17003-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8401d6ab-3939-43ff-811f-99f1b6a17003" (UID: "8401d6ab-3939-43ff-811f-99f1b6a17003"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.016357 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8401d6ab-3939-43ff-811f-99f1b6a17003-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.016398 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8401d6ab-3939-43ff-811f-99f1b6a17003-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.016412 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4f4s\" (UniqueName: \"kubernetes.io/projected/8401d6ab-3939-43ff-811f-99f1b6a17003-kube-api-access-q4f4s\") on node \"crc\" DevicePath \"\"" Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.272738 4805 generic.go:334] "Generic (PLEG): container finished" podID="8401d6ab-3939-43ff-811f-99f1b6a17003" containerID="9faed8f44461cf0f33c6e3a651b7677f4b4c38caadef849a6c18e7b3ca5da7ce" exitCode=0 Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.272778 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2jcx" event={"ID":"8401d6ab-3939-43ff-811f-99f1b6a17003","Type":"ContainerDied","Data":"9faed8f44461cf0f33c6e3a651b7677f4b4c38caadef849a6c18e7b3ca5da7ce"} Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.272803 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2jcx" event={"ID":"8401d6ab-3939-43ff-811f-99f1b6a17003","Type":"ContainerDied","Data":"69f414e1878e70c4ba134956c420d45243e0446b296157982305c73d91b1a767"} Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.272821 4805 scope.go:117] "RemoveContainer" containerID="9faed8f44461cf0f33c6e3a651b7677f4b4c38caadef849a6c18e7b3ca5da7ce" Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.272831 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2jcx" Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.307271 4805 scope.go:117] "RemoveContainer" containerID="5c86dc4eeee104b7e055c667dba33358abd0e4e001c138beb669e0367f3a677a" Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.321116 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2jcx"] Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.335507 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d2jcx"] Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.349909 4805 scope.go:117] "RemoveContainer" containerID="837a69c2bb92e5fbdf6392e925f18291c388e1969b1b482680f31ea7059c337b" Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.385356 4805 scope.go:117] "RemoveContainer" containerID="9faed8f44461cf0f33c6e3a651b7677f4b4c38caadef849a6c18e7b3ca5da7ce" Dec 16 12:46:04 crc kubenswrapper[4805]: E1216 12:46:04.385796 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9faed8f44461cf0f33c6e3a651b7677f4b4c38caadef849a6c18e7b3ca5da7ce\": container with ID starting with 9faed8f44461cf0f33c6e3a651b7677f4b4c38caadef849a6c18e7b3ca5da7ce not found: ID does not exist" containerID="9faed8f44461cf0f33c6e3a651b7677f4b4c38caadef849a6c18e7b3ca5da7ce" Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.385825 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9faed8f44461cf0f33c6e3a651b7677f4b4c38caadef849a6c18e7b3ca5da7ce"} err="failed to get container status \"9faed8f44461cf0f33c6e3a651b7677f4b4c38caadef849a6c18e7b3ca5da7ce\": rpc error: code = NotFound desc = could not find container \"9faed8f44461cf0f33c6e3a651b7677f4b4c38caadef849a6c18e7b3ca5da7ce\": container with ID starting with 9faed8f44461cf0f33c6e3a651b7677f4b4c38caadef849a6c18e7b3ca5da7ce not found: ID does not exist" Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.385846 4805 scope.go:117] "RemoveContainer" containerID="5c86dc4eeee104b7e055c667dba33358abd0e4e001c138beb669e0367f3a677a" Dec 16 12:46:04 crc kubenswrapper[4805]: E1216 12:46:04.386152 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c86dc4eeee104b7e055c667dba33358abd0e4e001c138beb669e0367f3a677a\": container with ID starting with 5c86dc4eeee104b7e055c667dba33358abd0e4e001c138beb669e0367f3a677a not found: ID does not exist" containerID="5c86dc4eeee104b7e055c667dba33358abd0e4e001c138beb669e0367f3a677a" Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.386184 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c86dc4eeee104b7e055c667dba33358abd0e4e001c138beb669e0367f3a677a"} err="failed to get container status \"5c86dc4eeee104b7e055c667dba33358abd0e4e001c138beb669e0367f3a677a\": rpc error: code = NotFound desc = could not find container \"5c86dc4eeee104b7e055c667dba33358abd0e4e001c138beb669e0367f3a677a\": container with ID starting with 5c86dc4eeee104b7e055c667dba33358abd0e4e001c138beb669e0367f3a677a not found: ID does not exist" Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.386200 4805 scope.go:117] "RemoveContainer" containerID="837a69c2bb92e5fbdf6392e925f18291c388e1969b1b482680f31ea7059c337b" Dec 16 12:46:04 crc kubenswrapper[4805]: E1216 12:46:04.386548 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"837a69c2bb92e5fbdf6392e925f18291c388e1969b1b482680f31ea7059c337b\": container with ID starting with 837a69c2bb92e5fbdf6392e925f18291c388e1969b1b482680f31ea7059c337b not found: ID does not exist" containerID="837a69c2bb92e5fbdf6392e925f18291c388e1969b1b482680f31ea7059c337b" Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.386570 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"837a69c2bb92e5fbdf6392e925f18291c388e1969b1b482680f31ea7059c337b"} err="failed to get container status \"837a69c2bb92e5fbdf6392e925f18291c388e1969b1b482680f31ea7059c337b\": rpc error: code = NotFound desc = could not find container \"837a69c2bb92e5fbdf6392e925f18291c388e1969b1b482680f31ea7059c337b\": container with ID starting with 837a69c2bb92e5fbdf6392e925f18291c388e1969b1b482680f31ea7059c337b not found: ID does not exist" Dec 16 12:46:04 crc kubenswrapper[4805]: I1216 12:46:04.536807 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8401d6ab-3939-43ff-811f-99f1b6a17003" path="/var/lib/kubelet/pods/8401d6ab-3939-43ff-811f-99f1b6a17003/volumes" Dec 16 12:46:17 crc kubenswrapper[4805]: I1216 12:46:17.522936 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:46:17 crc kubenswrapper[4805]: E1216 12:46:17.523936 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:46:31 crc kubenswrapper[4805]: I1216 12:46:31.522552 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:46:31 crc kubenswrapper[4805]: E1216 12:46:31.523402 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:46:44 crc kubenswrapper[4805]: I1216 12:46:44.522683 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:46:44 crc kubenswrapper[4805]: E1216 12:46:44.523590 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:46:59 crc kubenswrapper[4805]: I1216 12:46:59.522705 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:46:59 crc kubenswrapper[4805]: E1216 12:46:59.523571 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:47:12 crc kubenswrapper[4805]: I1216 12:47:12.522488 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:47:12 crc kubenswrapper[4805]: E1216 12:47:12.523291 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:47:25 crc kubenswrapper[4805]: I1216 12:47:25.522474 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:47:25 crc kubenswrapper[4805]: E1216 12:47:25.523330 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:47:38 crc kubenswrapper[4805]: I1216 12:47:38.523563 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:47:38 crc kubenswrapper[4805]: E1216 12:47:38.524243 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:47:51 crc kubenswrapper[4805]: I1216 12:47:51.522781 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:47:51 crc kubenswrapper[4805]: E1216 12:47:51.523628 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:48:03 crc kubenswrapper[4805]: I1216 12:48:03.523201 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:48:03 crc kubenswrapper[4805]: E1216 12:48:03.524643 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:48:14 crc kubenswrapper[4805]: I1216 12:48:14.523882 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:48:14 crc kubenswrapper[4805]: E1216 12:48:14.525130 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:48:27 crc kubenswrapper[4805]: I1216 12:48:27.601660 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:48:27 crc kubenswrapper[4805]: E1216 12:48:27.602537 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:48:42 crc kubenswrapper[4805]: I1216 12:48:42.523395 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:48:42 crc kubenswrapper[4805]: E1216 12:48:42.524228 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:48:53 crc kubenswrapper[4805]: I1216 12:48:53.523208 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:48:53 crc kubenswrapper[4805]: E1216 12:48:53.524092 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:48:54 crc kubenswrapper[4805]: I1216 12:48:54.991666 4805 generic.go:334] "Generic (PLEG): container finished" podID="e4b7d191-e86d-4386-935b-e3ce28794d6d" containerID="7714901e7a558e3b750b12943e0de6dae78918c82cd051ea919fdc52c99b237a" exitCode=0 Dec 16 12:48:54 crc kubenswrapper[4805]: I1216 12:48:54.991785 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" event={"ID":"e4b7d191-e86d-4386-935b-e3ce28794d6d","Type":"ContainerDied","Data":"7714901e7a558e3b750b12943e0de6dae78918c82cd051ea919fdc52c99b237a"} Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 
12:48:56.432483 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.519274 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-2\") pod \"e4b7d191-e86d-4386-935b-e3ce28794d6d\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.519630 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-1\") pod \"e4b7d191-e86d-4386-935b-e3ce28794d6d\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.519755 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkftr\" (UniqueName: \"kubernetes.io/projected/e4b7d191-e86d-4386-935b-e3ce28794d6d-kube-api-access-qkftr\") pod \"e4b7d191-e86d-4386-935b-e3ce28794d6d\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.519871 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-0\") pod \"e4b7d191-e86d-4386-935b-e3ce28794d6d\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.520014 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-telemetry-combined-ca-bundle\") pod \"e4b7d191-e86d-4386-935b-e3ce28794d6d\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.520046 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-inventory\") pod \"e4b7d191-e86d-4386-935b-e3ce28794d6d\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.520086 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ssh-key\") pod \"e4b7d191-e86d-4386-935b-e3ce28794d6d\" (UID: \"e4b7d191-e86d-4386-935b-e3ce28794d6d\") " Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.525449 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4b7d191-e86d-4386-935b-e3ce28794d6d-kube-api-access-qkftr" (OuterVolumeSpecName: "kube-api-access-qkftr") pod "e4b7d191-e86d-4386-935b-e3ce28794d6d" (UID: "e4b7d191-e86d-4386-935b-e3ce28794d6d"). InnerVolumeSpecName "kube-api-access-qkftr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.527324 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e4b7d191-e86d-4386-935b-e3ce28794d6d" (UID: "e4b7d191-e86d-4386-935b-e3ce28794d6d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.553965 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "e4b7d191-e86d-4386-935b-e3ce28794d6d" (UID: "e4b7d191-e86d-4386-935b-e3ce28794d6d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.554177 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "e4b7d191-e86d-4386-935b-e3ce28794d6d" (UID: "e4b7d191-e86d-4386-935b-e3ce28794d6d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.557187 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e4b7d191-e86d-4386-935b-e3ce28794d6d" (UID: "e4b7d191-e86d-4386-935b-e3ce28794d6d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.566334 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-inventory" (OuterVolumeSpecName: "inventory") pod "e4b7d191-e86d-4386-935b-e3ce28794d6d" (UID: "e4b7d191-e86d-4386-935b-e3ce28794d6d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.583243 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "e4b7d191-e86d-4386-935b-e3ce28794d6d" (UID: "e4b7d191-e86d-4386-935b-e3ce28794d6d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.622246 4805 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.622288 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-inventory\") on node \"crc\" DevicePath \"\"" Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.622296 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.622308 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.622318 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.622329 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkftr\" (UniqueName: \"kubernetes.io/projected/e4b7d191-e86d-4386-935b-e3ce28794d6d-kube-api-access-qkftr\") on node \"crc\" DevicePath \"\"" Dec 16 12:48:56 crc kubenswrapper[4805]: I1216 12:48:56.622337 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e4b7d191-e86d-4386-935b-e3ce28794d6d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 16 12:48:57 crc kubenswrapper[4805]: I1216 12:48:57.012233 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" event={"ID":"e4b7d191-e86d-4386-935b-e3ce28794d6d","Type":"ContainerDied","Data":"9ae7d9b890ed625a0ff925f2277588a54af24fd2ed8ba81a2c8f4c49da6c2d85"} Dec 16 12:48:57 crc kubenswrapper[4805]: I1216 12:48:57.012277 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ae7d9b890ed625a0ff925f2277588a54af24fd2ed8ba81a2c8f4c49da6c2d85" Dec 16 12:48:57 crc kubenswrapper[4805]: I1216 12:48:57.012330 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6" Dec 16 12:49:04 crc kubenswrapper[4805]: I1216 12:49:04.525113 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:49:04 crc kubenswrapper[4805]: E1216 12:49:04.526034 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:49:18 crc kubenswrapper[4805]: I1216 12:49:18.523684 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:49:18 crc kubenswrapper[4805]: E1216 12:49:18.524439 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:49:31 crc kubenswrapper[4805]: I1216 12:49:31.522782 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:49:31 crc kubenswrapper[4805]: E1216 12:49:31.523633 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:49:42 crc kubenswrapper[4805]: I1216 12:49:42.523084 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:49:42 crc kubenswrapper[4805]: E1216 12:49:42.523917 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:49:55 crc kubenswrapper[4805]: I1216 12:49:55.523699 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:49:55 crc kubenswrapper[4805]: E1216 12:49:55.524764 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:50:06 crc kubenswrapper[4805]: I1216 12:50:06.530171 4805 scope.go:117] "RemoveContainer" 
containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:50:07 crc kubenswrapper[4805]: I1216 12:50:07.746134 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"8bd72a242df420e691b50771a7a08f21fc30a06a2f411fb863baaddb9553decd"} Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.616247 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 16 12:50:14 crc kubenswrapper[4805]: E1216 12:50:14.617376 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8401d6ab-3939-43ff-811f-99f1b6a17003" containerName="registry-server" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.617395 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8401d6ab-3939-43ff-811f-99f1b6a17003" containerName="registry-server" Dec 16 12:50:14 crc kubenswrapper[4805]: E1216 12:50:14.617418 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8401d6ab-3939-43ff-811f-99f1b6a17003" containerName="extract-utilities" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.617431 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8401d6ab-3939-43ff-811f-99f1b6a17003" containerName="extract-utilities" Dec 16 12:50:14 crc kubenswrapper[4805]: E1216 12:50:14.617455 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b7d191-e86d-4386-935b-e3ce28794d6d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.617465 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b7d191-e86d-4386-935b-e3ce28794d6d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 16 12:50:14 crc kubenswrapper[4805]: E1216 12:50:14.617481 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8401d6ab-3939-43ff-811f-99f1b6a17003" containerName="extract-content" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.617489 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8401d6ab-3939-43ff-811f-99f1b6a17003" containerName="extract-content" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.617756 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8401d6ab-3939-43ff-811f-99f1b6a17003" containerName="registry-server" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.617784 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4b7d191-e86d-4386-935b-e3ce28794d6d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.618746 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.632623 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.632935 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.632962 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.633233 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j58t8" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.641044 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.819483 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/96a2c3a4-408a-4437-9a22-bc7c41f87222-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.819575 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.819684 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.819709 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnpzs\" (UniqueName: \"kubernetes.io/projected/96a2c3a4-408a-4437-9a22-bc7c41f87222-kube-api-access-vnpzs\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.819808 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/96a2c3a4-408a-4437-9a22-bc7c41f87222-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.819845 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/96a2c3a4-408a-4437-9a22-bc7c41f87222-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.819893 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/96a2c3a4-408a-4437-9a22-bc7c41f87222-config-data\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.819919 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.819946 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.922308 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.922369 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnpzs\" (UniqueName: \"kubernetes.io/projected/96a2c3a4-408a-4437-9a22-bc7c41f87222-kube-api-access-vnpzs\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.922461 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/96a2c3a4-408a-4437-9a22-bc7c41f87222-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.922497 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/96a2c3a4-408a-4437-9a22-bc7c41f87222-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.922557 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96a2c3a4-408a-4437-9a22-bc7c41f87222-config-data\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.922593 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.922621 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-ca-certs\") 
pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.922696 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/96a2c3a4-408a-4437-9a22-bc7c41f87222-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.922736 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.923311 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.923975 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/96a2c3a4-408a-4437-9a22-bc7c41f87222-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.924008 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/96a2c3a4-408a-4437-9a22-bc7c41f87222-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.924802 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/96a2c3a4-408a-4437-9a22-bc7c41f87222-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.924995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96a2c3a4-408a-4437-9a22-bc7c41f87222-config-data\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.932131 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.932560 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: 
I1216 12:50:14.934273 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.946518 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnpzs\" (UniqueName: \"kubernetes.io/projected/96a2c3a4-408a-4437-9a22-bc7c41f87222-kube-api-access-vnpzs\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:14 crc kubenswrapper[4805]: I1216 12:50:14.965337 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " pod="openstack/tempest-tests-tempest" Dec 16 12:50:15 crc kubenswrapper[4805]: I1216 12:50:15.267205 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 16 12:50:15 crc kubenswrapper[4805]: I1216 12:50:15.774177 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 16 12:50:15 crc kubenswrapper[4805]: I1216 12:50:15.820237 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"96a2c3a4-408a-4437-9a22-bc7c41f87222","Type":"ContainerStarted","Data":"825f3ad74229c7e0549b8648a3c818de26ebe3e27f6677b9525f4f253e7ae2fd"} Dec 16 12:50:59 crc kubenswrapper[4805]: E1216 12:50:59.104277 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 16 12:50:59 crc kubenswrapper[4805]: E1216 12:50:59.105020 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnpzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(96a2c3a4-408a-4437-9a22-bc7c41f87222): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 16 12:50:59 crc kubenswrapper[4805]: E1216 12:50:59.106448 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="96a2c3a4-408a-4437-9a22-bc7c41f87222" Dec 16 12:50:59 crc kubenswrapper[4805]: E1216 12:50:59.127020 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="96a2c3a4-408a-4437-9a22-bc7c41f87222" Dec 16 12:51:10 crc kubenswrapper[4805]: I1216 12:51:10.526955 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 16 12:51:11 crc kubenswrapper[4805]: I1216 12:51:11.156056 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 16 12:51:12 crc kubenswrapper[4805]: I1216 12:51:12.279384 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"96a2c3a4-408a-4437-9a22-bc7c41f87222","Type":"ContainerStarted","Data":"ad88aef882332705252a369420f41dc2ef2a393857548a02daeec35829767921"} Dec 16 12:51:12 crc kubenswrapper[4805]: I1216 12:51:12.308934 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.935714625 podStartE2EDuration="59.308872466s" podCreationTimestamp="2025-12-16 12:50:13 +0000 UTC" firstStartedPulling="2025-12-16 12:50:15.779466838 +0000 UTC m=+3289.497724643" lastFinishedPulling="2025-12-16 12:51:11.152624679 +0000 UTC m=+3344.870882484" observedRunningTime="2025-12-16 12:51:12.306613611 +0000 UTC m=+3346.024871436" watchObservedRunningTime="2025-12-16 12:51:12.308872466 +0000 UTC m=+3346.027130281" Dec 16 12:51:50 crc kubenswrapper[4805]: I1216 12:51:50.893154 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vmpkf"] Dec 16 12:51:50 crc kubenswrapper[4805]: I1216 12:51:50.901716 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:51:50 crc kubenswrapper[4805]: I1216 12:51:50.939857 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmpkf"] Dec 16 12:51:51 crc kubenswrapper[4805]: I1216 12:51:51.058463 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-catalog-content\") pod \"community-operators-vmpkf\" (UID: \"2d83af4c-cd3f-43f5-83a1-470f29d4ca53\") " pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:51:51 crc kubenswrapper[4805]: I1216 12:51:51.058522 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv676\" (UniqueName: \"kubernetes.io/projected/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-kube-api-access-nv676\") pod \"community-operators-vmpkf\" (UID: \"2d83af4c-cd3f-43f5-83a1-470f29d4ca53\") " pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:51:51 crc kubenswrapper[4805]: I1216 12:51:51.058561 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-utilities\") pod \"community-operators-vmpkf\" (UID: \"2d83af4c-cd3f-43f5-83a1-470f29d4ca53\") " pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:51:51 crc kubenswrapper[4805]: I1216 12:51:51.160405 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-catalog-content\") pod \"community-operators-vmpkf\" (UID: \"2d83af4c-cd3f-43f5-83a1-470f29d4ca53\") " pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:51:51 crc kubenswrapper[4805]: I1216 12:51:51.160454 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv676\" (UniqueName: \"kubernetes.io/projected/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-kube-api-access-nv676\") pod \"community-operators-vmpkf\" (UID: \"2d83af4c-cd3f-43f5-83a1-470f29d4ca53\") " pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:51:51 crc kubenswrapper[4805]: I1216 12:51:51.161024 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-catalog-content\") pod \"community-operators-vmpkf\" (UID: \"2d83af4c-cd3f-43f5-83a1-470f29d4ca53\") " pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:51:51 crc kubenswrapper[4805]: I1216 12:51:51.161444 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-utilities\") pod \"community-operators-vmpkf\" (UID: \"2d83af4c-cd3f-43f5-83a1-470f29d4ca53\") " pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:51:51 crc kubenswrapper[4805]: I1216 12:51:51.161844 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-utilities\") pod \"community-operators-vmpkf\" (UID: \"2d83af4c-cd3f-43f5-83a1-470f29d4ca53\") " pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:51:51 crc kubenswrapper[4805]: I1216 12:51:51.188425 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nv676\" (UniqueName: \"kubernetes.io/projected/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-kube-api-access-nv676\") pod \"community-operators-vmpkf\" (UID: \"2d83af4c-cd3f-43f5-83a1-470f29d4ca53\") " pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:51:51 crc kubenswrapper[4805]: I1216 12:51:51.234271 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:51:52 crc kubenswrapper[4805]: I1216 12:51:52.152771 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmpkf"] Dec 16 12:51:52 crc kubenswrapper[4805]: I1216 12:51:52.680254 4805 generic.go:334] "Generic (PLEG): container finished" podID="2d83af4c-cd3f-43f5-83a1-470f29d4ca53" containerID="80d686b5634903cf8379b74f355ad77ec529b0fbc85f42ef8e5f2e709b81af55" exitCode=0 Dec 16 12:51:52 crc kubenswrapper[4805]: I1216 12:51:52.680345 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmpkf" event={"ID":"2d83af4c-cd3f-43f5-83a1-470f29d4ca53","Type":"ContainerDied","Data":"80d686b5634903cf8379b74f355ad77ec529b0fbc85f42ef8e5f2e709b81af55"} Dec 16 12:51:52 crc kubenswrapper[4805]: I1216 12:51:52.680950 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmpkf" event={"ID":"2d83af4c-cd3f-43f5-83a1-470f29d4ca53","Type":"ContainerStarted","Data":"09f2c418523fbded6c41bb94cbdfc4955f478ea7129d6affe0f4685a21d3a52a"} Dec 16 12:51:53 crc kubenswrapper[4805]: I1216 12:51:53.697280 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmpkf" event={"ID":"2d83af4c-cd3f-43f5-83a1-470f29d4ca53","Type":"ContainerStarted","Data":"78dcf6f204f4b86fd79314da18ad0ac2f7002008f8bcf011daf2fb5226428e15"} Dec 16 12:51:55 crc kubenswrapper[4805]: I1216 12:51:55.723207 4805 generic.go:334] "Generic (PLEG): container finished" podID="2d83af4c-cd3f-43f5-83a1-470f29d4ca53" containerID="78dcf6f204f4b86fd79314da18ad0ac2f7002008f8bcf011daf2fb5226428e15" exitCode=0 Dec 16 12:51:55 crc kubenswrapper[4805]: I1216 12:51:55.723471 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmpkf" event={"ID":"2d83af4c-cd3f-43f5-83a1-470f29d4ca53","Type":"ContainerDied","Data":"78dcf6f204f4b86fd79314da18ad0ac2f7002008f8bcf011daf2fb5226428e15"} Dec 16 12:51:56 crc kubenswrapper[4805]: I1216 12:51:56.735644 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmpkf" event={"ID":"2d83af4c-cd3f-43f5-83a1-470f29d4ca53","Type":"ContainerStarted","Data":"b28f2fd1a9ffadc35b9e01fd89ad83d9822d84aa5b72f738b19f81c97719d0be"} Dec 16 12:51:56 crc kubenswrapper[4805]: I1216 12:51:56.762924 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vmpkf" podStartSLOduration=3.188875209 podStartE2EDuration="6.7629034s" podCreationTimestamp="2025-12-16 12:51:50 +0000 UTC" firstStartedPulling="2025-12-16 12:51:52.682966468 +0000 UTC m=+3386.401224273" lastFinishedPulling="2025-12-16 12:51:56.256994659 +0000 UTC m=+3389.975252464" observedRunningTime="2025-12-16 12:51:56.75625188 +0000 UTC m=+3390.474509685" watchObservedRunningTime="2025-12-16 12:51:56.7629034 +0000 UTC m=+3390.481161235" Dec 16 12:52:01 crc kubenswrapper[4805]: I1216 12:52:01.235175 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:52:01 crc kubenswrapper[4805]: I1216 12:52:01.236533 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:52:01 crc kubenswrapper[4805]: I1216 12:52:01.296113 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:52:01 crc kubenswrapper[4805]: I1216 12:52:01.860990 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:52:01 crc kubenswrapper[4805]: I1216 12:52:01.935969 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmpkf"] Dec 16 12:52:03 crc kubenswrapper[4805]: I1216 12:52:03.800969 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vmpkf" podUID="2d83af4c-cd3f-43f5-83a1-470f29d4ca53" containerName="registry-server" containerID="cri-o://b28f2fd1a9ffadc35b9e01fd89ad83d9822d84aa5b72f738b19f81c97719d0be" gracePeriod=2 Dec 16 12:52:03 crc kubenswrapper[4805]: I1216 12:52:03.968605 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f6cxb"] Dec 16 12:52:03 crc kubenswrapper[4805]: I1216 12:52:03.970823 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:03 crc kubenswrapper[4805]: I1216 12:52:03.982259 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f6cxb"] Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.079200 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6058e6f5-6206-40ab-8f80-a9c5426e2b32-utilities\") pod \"redhat-marketplace-f6cxb\" (UID: \"6058e6f5-6206-40ab-8f80-a9c5426e2b32\") " pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.079775 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptqj8\" (UniqueName: \"kubernetes.io/projected/6058e6f5-6206-40ab-8f80-a9c5426e2b32-kube-api-access-ptqj8\") pod \"redhat-marketplace-f6cxb\" (UID: \"6058e6f5-6206-40ab-8f80-a9c5426e2b32\") " pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.079967 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6058e6f5-6206-40ab-8f80-a9c5426e2b32-catalog-content\") pod \"redhat-marketplace-f6cxb\" (UID: \"6058e6f5-6206-40ab-8f80-a9c5426e2b32\") " pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.185148 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptqj8\" (UniqueName: \"kubernetes.io/projected/6058e6f5-6206-40ab-8f80-a9c5426e2b32-kube-api-access-ptqj8\") pod \"redhat-marketplace-f6cxb\" (UID: \"6058e6f5-6206-40ab-8f80-a9c5426e2b32\") " pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.185298 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6058e6f5-6206-40ab-8f80-a9c5426e2b32-catalog-content\") pod \"redhat-marketplace-f6cxb\" (UID: \"6058e6f5-6206-40ab-8f80-a9c5426e2b32\") " pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.185349 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6058e6f5-6206-40ab-8f80-a9c5426e2b32-utilities\") pod \"redhat-marketplace-f6cxb\" (UID: \"6058e6f5-6206-40ab-8f80-a9c5426e2b32\") " pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.185988 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6058e6f5-6206-40ab-8f80-a9c5426e2b32-utilities\") pod \"redhat-marketplace-f6cxb\" (UID: \"6058e6f5-6206-40ab-8f80-a9c5426e2b32\") " pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.193643 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6058e6f5-6206-40ab-8f80-a9c5426e2b32-catalog-content\") pod \"redhat-marketplace-f6cxb\" (UID: \"6058e6f5-6206-40ab-8f80-a9c5426e2b32\") " pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.226616 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptqj8\" (UniqueName: \"kubernetes.io/projected/6058e6f5-6206-40ab-8f80-a9c5426e2b32-kube-api-access-ptqj8\") pod \"redhat-marketplace-f6cxb\" (UID: \"6058e6f5-6206-40ab-8f80-a9c5426e2b32\") " pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.343740 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.551037 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.696934 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-utilities\") pod \"2d83af4c-cd3f-43f5-83a1-470f29d4ca53\" (UID: \"2d83af4c-cd3f-43f5-83a1-470f29d4ca53\") " Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.697300 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-catalog-content\") pod \"2d83af4c-cd3f-43f5-83a1-470f29d4ca53\" (UID: \"2d83af4c-cd3f-43f5-83a1-470f29d4ca53\") " Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.697538 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv676\" (UniqueName: \"kubernetes.io/projected/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-kube-api-access-nv676\") pod \"2d83af4c-cd3f-43f5-83a1-470f29d4ca53\" (UID: \"2d83af4c-cd3f-43f5-83a1-470f29d4ca53\") " Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.700111 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-utilities" (OuterVolumeSpecName: "utilities") pod "2d83af4c-cd3f-43f5-83a1-470f29d4ca53" (UID: "2d83af4c-cd3f-43f5-83a1-470f29d4ca53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.711882 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-kube-api-access-nv676" (OuterVolumeSpecName: "kube-api-access-nv676") pod "2d83af4c-cd3f-43f5-83a1-470f29d4ca53" (UID: "2d83af4c-cd3f-43f5-83a1-470f29d4ca53"). InnerVolumeSpecName "kube-api-access-nv676". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.786868 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d83af4c-cd3f-43f5-83a1-470f29d4ca53" (UID: "2d83af4c-cd3f-43f5-83a1-470f29d4ca53"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.801133 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.801198 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.801214 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv676\" (UniqueName: \"kubernetes.io/projected/2d83af4c-cd3f-43f5-83a1-470f29d4ca53-kube-api-access-nv676\") on node \"crc\" DevicePath \"\"" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.821965 4805 generic.go:334] "Generic (PLEG): container finished" podID="2d83af4c-cd3f-43f5-83a1-470f29d4ca53" containerID="b28f2fd1a9ffadc35b9e01fd89ad83d9822d84aa5b72f738b19f81c97719d0be" exitCode=0 Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.822006 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmpkf" event={"ID":"2d83af4c-cd3f-43f5-83a1-470f29d4ca53","Type":"ContainerDied","Data":"b28f2fd1a9ffadc35b9e01fd89ad83d9822d84aa5b72f738b19f81c97719d0be"} Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.822036 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmpkf" event={"ID":"2d83af4c-cd3f-43f5-83a1-470f29d4ca53","Type":"ContainerDied","Data":"09f2c418523fbded6c41bb94cbdfc4955f478ea7129d6affe0f4685a21d3a52a"} Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.822069 4805 scope.go:117] "RemoveContainer" containerID="b28f2fd1a9ffadc35b9e01fd89ad83d9822d84aa5b72f738b19f81c97719d0be" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.822245 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmpkf" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.887044 4805 scope.go:117] "RemoveContainer" containerID="78dcf6f204f4b86fd79314da18ad0ac2f7002008f8bcf011daf2fb5226428e15" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.892991 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmpkf"] Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.934003 4805 scope.go:117] "RemoveContainer" containerID="80d686b5634903cf8379b74f355ad77ec529b0fbc85f42ef8e5f2e709b81af55" Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.939575 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vmpkf"] Dec 16 12:52:04 crc kubenswrapper[4805]: I1216 12:52:04.960911 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f6cxb"] Dec 16 12:52:05 crc kubenswrapper[4805]: I1216 12:52:05.057853 4805 scope.go:117] "RemoveContainer" containerID="b28f2fd1a9ffadc35b9e01fd89ad83d9822d84aa5b72f738b19f81c97719d0be" Dec 16 12:52:05 crc kubenswrapper[4805]: E1216 12:52:05.058631 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b28f2fd1a9ffadc35b9e01fd89ad83d9822d84aa5b72f738b19f81c97719d0be\": container with ID starting with b28f2fd1a9ffadc35b9e01fd89ad83d9822d84aa5b72f738b19f81c97719d0be not found: ID does not exist" containerID="b28f2fd1a9ffadc35b9e01fd89ad83d9822d84aa5b72f738b19f81c97719d0be" Dec 16 12:52:05 crc kubenswrapper[4805]: I1216 12:52:05.058754 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b28f2fd1a9ffadc35b9e01fd89ad83d9822d84aa5b72f738b19f81c97719d0be"} err="failed to get container status \"b28f2fd1a9ffadc35b9e01fd89ad83d9822d84aa5b72f738b19f81c97719d0be\": rpc error: code = NotFound desc = could not find container \"b28f2fd1a9ffadc35b9e01fd89ad83d9822d84aa5b72f738b19f81c97719d0be\": container with ID starting with b28f2fd1a9ffadc35b9e01fd89ad83d9822d84aa5b72f738b19f81c97719d0be not found: ID does not exist" Dec 16 12:52:05 crc kubenswrapper[4805]: I1216 12:52:05.058870 4805 scope.go:117] "RemoveContainer" containerID="78dcf6f204f4b86fd79314da18ad0ac2f7002008f8bcf011daf2fb5226428e15" Dec 16 12:52:05 crc kubenswrapper[4805]: E1216 12:52:05.059805 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78dcf6f204f4b86fd79314da18ad0ac2f7002008f8bcf011daf2fb5226428e15\": container with ID starting with 78dcf6f204f4b86fd79314da18ad0ac2f7002008f8bcf011daf2fb5226428e15 not found: ID does not exist" containerID="78dcf6f204f4b86fd79314da18ad0ac2f7002008f8bcf011daf2fb5226428e15" Dec 16 12:52:05 crc kubenswrapper[4805]: I1216 12:52:05.059915 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78dcf6f204f4b86fd79314da18ad0ac2f7002008f8bcf011daf2fb5226428e15"} err="failed to get container status \"78dcf6f204f4b86fd79314da18ad0ac2f7002008f8bcf011daf2fb5226428e15\": rpc error: code = NotFound desc = could not find container \"78dcf6f204f4b86fd79314da18ad0ac2f7002008f8bcf011daf2fb5226428e15\": container with ID starting with 78dcf6f204f4b86fd79314da18ad0ac2f7002008f8bcf011daf2fb5226428e15 not found: ID does not exist" Dec 16 12:52:05 crc kubenswrapper[4805]: I1216 12:52:05.060002 4805 scope.go:117] "RemoveContainer" 
containerID="80d686b5634903cf8379b74f355ad77ec529b0fbc85f42ef8e5f2e709b81af55" Dec 16 12:52:05 crc kubenswrapper[4805]: E1216 12:52:05.060968 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80d686b5634903cf8379b74f355ad77ec529b0fbc85f42ef8e5f2e709b81af55\": container with ID starting with 80d686b5634903cf8379b74f355ad77ec529b0fbc85f42ef8e5f2e709b81af55 not found: ID does not exist" containerID="80d686b5634903cf8379b74f355ad77ec529b0fbc85f42ef8e5f2e709b81af55" Dec 16 12:52:05 crc kubenswrapper[4805]: I1216 12:52:05.061085 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80d686b5634903cf8379b74f355ad77ec529b0fbc85f42ef8e5f2e709b81af55"} err="failed to get container status \"80d686b5634903cf8379b74f355ad77ec529b0fbc85f42ef8e5f2e709b81af55\": rpc error: code = NotFound desc = could not find container \"80d686b5634903cf8379b74f355ad77ec529b0fbc85f42ef8e5f2e709b81af55\": container with ID starting with 80d686b5634903cf8379b74f355ad77ec529b0fbc85f42ef8e5f2e709b81af55 not found: ID does not exist" Dec 16 12:52:05 crc kubenswrapper[4805]: I1216 12:52:05.834882 4805 generic.go:334] "Generic (PLEG): container finished" podID="6058e6f5-6206-40ab-8f80-a9c5426e2b32" containerID="5dd2b6ce3c4289c17b65dd4953ca345d3f0ab1776a3933e33e7b5ce3f8813f57" exitCode=0 Dec 16 12:52:05 crc kubenswrapper[4805]: I1216 12:52:05.834973 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6cxb" event={"ID":"6058e6f5-6206-40ab-8f80-a9c5426e2b32","Type":"ContainerDied","Data":"5dd2b6ce3c4289c17b65dd4953ca345d3f0ab1776a3933e33e7b5ce3f8813f57"} Dec 16 12:52:05 crc kubenswrapper[4805]: I1216 12:52:05.835009 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6cxb" event={"ID":"6058e6f5-6206-40ab-8f80-a9c5426e2b32","Type":"ContainerStarted","Data":"1a2fc7d784b6b4c330582eaf179e4aa8872bbfd846b9b4b5f390194d1806e844"} Dec 16 12:52:06 crc kubenswrapper[4805]: I1216 12:52:06.545300 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d83af4c-cd3f-43f5-83a1-470f29d4ca53" path="/var/lib/kubelet/pods/2d83af4c-cd3f-43f5-83a1-470f29d4ca53/volumes" Dec 16 12:52:06 crc kubenswrapper[4805]: I1216 12:52:06.853424 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6cxb" event={"ID":"6058e6f5-6206-40ab-8f80-a9c5426e2b32","Type":"ContainerStarted","Data":"824cd3d95eb647916af1e9ee5e49b8e513e670dde4c6125658416c2e757fe1a9"} Dec 16 12:52:07 crc kubenswrapper[4805]: I1216 12:52:07.863689 4805 generic.go:334] "Generic (PLEG): container finished" podID="6058e6f5-6206-40ab-8f80-a9c5426e2b32" containerID="824cd3d95eb647916af1e9ee5e49b8e513e670dde4c6125658416c2e757fe1a9" exitCode=0 Dec 16 12:52:07 crc kubenswrapper[4805]: I1216 12:52:07.863747 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6cxb" event={"ID":"6058e6f5-6206-40ab-8f80-a9c5426e2b32","Type":"ContainerDied","Data":"824cd3d95eb647916af1e9ee5e49b8e513e670dde4c6125658416c2e757fe1a9"} Dec 16 12:52:08 crc kubenswrapper[4805]: I1216 12:52:08.876919 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6cxb" event={"ID":"6058e6f5-6206-40ab-8f80-a9c5426e2b32","Type":"ContainerStarted","Data":"68c5ac4ea1d330d472f4265a9c86a46750873c0f44281eb87e01a15e0ed3e4bc"} Dec 16 12:52:14 crc kubenswrapper[4805]: I1216 
12:52:14.345075 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:14 crc kubenswrapper[4805]: I1216 12:52:14.347638 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:14 crc kubenswrapper[4805]: I1216 12:52:14.411760 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:14 crc kubenswrapper[4805]: I1216 12:52:14.433860 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f6cxb" podStartSLOduration=8.703324854 podStartE2EDuration="11.43383973s" podCreationTimestamp="2025-12-16 12:52:03 +0000 UTC" firstStartedPulling="2025-12-16 12:52:05.838270862 +0000 UTC m=+3399.556528667" lastFinishedPulling="2025-12-16 12:52:08.568785738 +0000 UTC m=+3402.287043543" observedRunningTime="2025-12-16 12:52:08.90286337 +0000 UTC m=+3402.621121165" watchObservedRunningTime="2025-12-16 12:52:14.43383973 +0000 UTC m=+3408.152097545" Dec 16 12:52:15 crc kubenswrapper[4805]: I1216 12:52:15.017430 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:15 crc kubenswrapper[4805]: I1216 12:52:15.106260 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f6cxb"] Dec 16 12:52:17 crc kubenswrapper[4805]: I1216 12:52:17.105969 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f6cxb" podUID="6058e6f5-6206-40ab-8f80-a9c5426e2b32" containerName="registry-server" containerID="cri-o://68c5ac4ea1d330d472f4265a9c86a46750873c0f44281eb87e01a15e0ed3e4bc" gracePeriod=2 Dec 16 12:52:17 crc kubenswrapper[4805]: I1216 12:52:17.784900 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:17 crc kubenswrapper[4805]: I1216 12:52:17.916442 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6058e6f5-6206-40ab-8f80-a9c5426e2b32-utilities\") pod \"6058e6f5-6206-40ab-8f80-a9c5426e2b32\" (UID: \"6058e6f5-6206-40ab-8f80-a9c5426e2b32\") " Dec 16 12:52:17 crc kubenswrapper[4805]: I1216 12:52:17.917339 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6058e6f5-6206-40ab-8f80-a9c5426e2b32-utilities" (OuterVolumeSpecName: "utilities") pod "6058e6f5-6206-40ab-8f80-a9c5426e2b32" (UID: "6058e6f5-6206-40ab-8f80-a9c5426e2b32"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:52:17 crc kubenswrapper[4805]: I1216 12:52:17.917410 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptqj8\" (UniqueName: \"kubernetes.io/projected/6058e6f5-6206-40ab-8f80-a9c5426e2b32-kube-api-access-ptqj8\") pod \"6058e6f5-6206-40ab-8f80-a9c5426e2b32\" (UID: \"6058e6f5-6206-40ab-8f80-a9c5426e2b32\") " Dec 16 12:52:17 crc kubenswrapper[4805]: I1216 12:52:17.917571 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6058e6f5-6206-40ab-8f80-a9c5426e2b32-catalog-content\") pod \"6058e6f5-6206-40ab-8f80-a9c5426e2b32\" (UID: \"6058e6f5-6206-40ab-8f80-a9c5426e2b32\") " Dec 16 12:52:17 crc kubenswrapper[4805]: I1216 12:52:17.927223 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6058e6f5-6206-40ab-8f80-a9c5426e2b32-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:52:17 crc kubenswrapper[4805]: I1216 12:52:17.933984 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6058e6f5-6206-40ab-8f80-a9c5426e2b32-kube-api-access-ptqj8" (OuterVolumeSpecName: "kube-api-access-ptqj8") pod "6058e6f5-6206-40ab-8f80-a9c5426e2b32" (UID: "6058e6f5-6206-40ab-8f80-a9c5426e2b32"). InnerVolumeSpecName "kube-api-access-ptqj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:52:17 crc kubenswrapper[4805]: I1216 12:52:17.940608 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6058e6f5-6206-40ab-8f80-a9c5426e2b32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6058e6f5-6206-40ab-8f80-a9c5426e2b32" (UID: "6058e6f5-6206-40ab-8f80-a9c5426e2b32"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.029116 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptqj8\" (UniqueName: \"kubernetes.io/projected/6058e6f5-6206-40ab-8f80-a9c5426e2b32-kube-api-access-ptqj8\") on node \"crc\" DevicePath \"\"" Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.029441 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6058e6f5-6206-40ab-8f80-a9c5426e2b32-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.118543 4805 generic.go:334] "Generic (PLEG): container finished" podID="6058e6f5-6206-40ab-8f80-a9c5426e2b32" containerID="68c5ac4ea1d330d472f4265a9c86a46750873c0f44281eb87e01a15e0ed3e4bc" exitCode=0 Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.118597 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6cxb" event={"ID":"6058e6f5-6206-40ab-8f80-a9c5426e2b32","Type":"ContainerDied","Data":"68c5ac4ea1d330d472f4265a9c86a46750873c0f44281eb87e01a15e0ed3e4bc"} Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.118627 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f6cxb" Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.118655 4805 scope.go:117] "RemoveContainer" containerID="68c5ac4ea1d330d472f4265a9c86a46750873c0f44281eb87e01a15e0ed3e4bc" Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.118638 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6cxb" event={"ID":"6058e6f5-6206-40ab-8f80-a9c5426e2b32","Type":"ContainerDied","Data":"1a2fc7d784b6b4c330582eaf179e4aa8872bbfd846b9b4b5f390194d1806e844"} Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.159305 4805 scope.go:117] "RemoveContainer" containerID="824cd3d95eb647916af1e9ee5e49b8e513e670dde4c6125658416c2e757fe1a9" Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.184007 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f6cxb"] Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.188903 4805 scope.go:117] "RemoveContainer" containerID="5dd2b6ce3c4289c17b65dd4953ca345d3f0ab1776a3933e33e7b5ce3f8813f57" Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.193325 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f6cxb"] Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.247569 4805 scope.go:117] "RemoveContainer" containerID="68c5ac4ea1d330d472f4265a9c86a46750873c0f44281eb87e01a15e0ed3e4bc" Dec 16 12:52:18 crc kubenswrapper[4805]: E1216 12:52:18.248077 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68c5ac4ea1d330d472f4265a9c86a46750873c0f44281eb87e01a15e0ed3e4bc\": container with ID starting with 68c5ac4ea1d330d472f4265a9c86a46750873c0f44281eb87e01a15e0ed3e4bc not found: ID does not exist" containerID="68c5ac4ea1d330d472f4265a9c86a46750873c0f44281eb87e01a15e0ed3e4bc" Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.248108 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68c5ac4ea1d330d472f4265a9c86a46750873c0f44281eb87e01a15e0ed3e4bc"} err="failed to get container status \"68c5ac4ea1d330d472f4265a9c86a46750873c0f44281eb87e01a15e0ed3e4bc\": rpc error: code = NotFound desc = could not find container \"68c5ac4ea1d330d472f4265a9c86a46750873c0f44281eb87e01a15e0ed3e4bc\": container with ID starting with 68c5ac4ea1d330d472f4265a9c86a46750873c0f44281eb87e01a15e0ed3e4bc not found: ID does not exist" Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.248128 4805 scope.go:117] "RemoveContainer" containerID="824cd3d95eb647916af1e9ee5e49b8e513e670dde4c6125658416c2e757fe1a9" Dec 16 12:52:18 crc kubenswrapper[4805]: E1216 12:52:18.248472 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"824cd3d95eb647916af1e9ee5e49b8e513e670dde4c6125658416c2e757fe1a9\": container with ID starting with 824cd3d95eb647916af1e9ee5e49b8e513e670dde4c6125658416c2e757fe1a9 not found: ID does not exist" containerID="824cd3d95eb647916af1e9ee5e49b8e513e670dde4c6125658416c2e757fe1a9" Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.248496 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"824cd3d95eb647916af1e9ee5e49b8e513e670dde4c6125658416c2e757fe1a9"} err="failed to get container status \"824cd3d95eb647916af1e9ee5e49b8e513e670dde4c6125658416c2e757fe1a9\": rpc error: code = NotFound desc = could not find 
container \"824cd3d95eb647916af1e9ee5e49b8e513e670dde4c6125658416c2e757fe1a9\": container with ID starting with 824cd3d95eb647916af1e9ee5e49b8e513e670dde4c6125658416c2e757fe1a9 not found: ID does not exist" Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.248510 4805 scope.go:117] "RemoveContainer" containerID="5dd2b6ce3c4289c17b65dd4953ca345d3f0ab1776a3933e33e7b5ce3f8813f57" Dec 16 12:52:18 crc kubenswrapper[4805]: E1216 12:52:18.252362 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dd2b6ce3c4289c17b65dd4953ca345d3f0ab1776a3933e33e7b5ce3f8813f57\": container with ID starting with 5dd2b6ce3c4289c17b65dd4953ca345d3f0ab1776a3933e33e7b5ce3f8813f57 not found: ID does not exist" containerID="5dd2b6ce3c4289c17b65dd4953ca345d3f0ab1776a3933e33e7b5ce3f8813f57" Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.252413 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd2b6ce3c4289c17b65dd4953ca345d3f0ab1776a3933e33e7b5ce3f8813f57"} err="failed to get container status \"5dd2b6ce3c4289c17b65dd4953ca345d3f0ab1776a3933e33e7b5ce3f8813f57\": rpc error: code = NotFound desc = could not find container \"5dd2b6ce3c4289c17b65dd4953ca345d3f0ab1776a3933e33e7b5ce3f8813f57\": container with ID starting with 5dd2b6ce3c4289c17b65dd4953ca345d3f0ab1776a3933e33e7b5ce3f8813f57 not found: ID does not exist" Dec 16 12:52:18 crc kubenswrapper[4805]: I1216 12:52:18.535124 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6058e6f5-6206-40ab-8f80-a9c5426e2b32" path="/var/lib/kubelet/pods/6058e6f5-6206-40ab-8f80-a9c5426e2b32/volumes" Dec 16 12:52:27 crc kubenswrapper[4805]: I1216 12:52:27.071922 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:52:27 crc kubenswrapper[4805]: I1216 12:52:27.072572 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:52:57 crc kubenswrapper[4805]: I1216 12:52:57.071454 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:52:57 crc kubenswrapper[4805]: I1216 12:52:57.073285 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:53:27 crc kubenswrapper[4805]: I1216 12:53:27.071720 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 
12:53:27 crc kubenswrapper[4805]: I1216 12:53:27.072309 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:53:27 crc kubenswrapper[4805]: I1216 12:53:27.072354 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 12:53:27 crc kubenswrapper[4805]: I1216 12:53:27.073169 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8bd72a242df420e691b50771a7a08f21fc30a06a2f411fb863baaddb9553decd"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 12:53:27 crc kubenswrapper[4805]: I1216 12:53:27.073228 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://8bd72a242df420e691b50771a7a08f21fc30a06a2f411fb863baaddb9553decd" gracePeriod=600 Dec 16 12:53:27 crc kubenswrapper[4805]: I1216 12:53:27.998789 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="8bd72a242df420e691b50771a7a08f21fc30a06a2f411fb863baaddb9553decd" exitCode=0 Dec 16 12:53:27 crc kubenswrapper[4805]: I1216 12:53:27.998903 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"8bd72a242df420e691b50771a7a08f21fc30a06a2f411fb863baaddb9553decd"} Dec 16 12:53:27 crc kubenswrapper[4805]: I1216 12:53:27.999250 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d"} Dec 16 12:53:27 crc kubenswrapper[4805]: I1216 12:53:27.999324 4805 scope.go:117] "RemoveContainer" containerID="a65097f639df01a1dc27e1f86983f4195322e3b8da666f8dffdd1eb84b6baacb" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.487382 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l78r7"] Dec 16 12:54:04 crc kubenswrapper[4805]: E1216 12:54:04.488593 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d83af4c-cd3f-43f5-83a1-470f29d4ca53" containerName="registry-server" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.488615 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d83af4c-cd3f-43f5-83a1-470f29d4ca53" containerName="registry-server" Dec 16 12:54:04 crc kubenswrapper[4805]: E1216 12:54:04.488629 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6058e6f5-6206-40ab-8f80-a9c5426e2b32" containerName="registry-server" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.488637 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6058e6f5-6206-40ab-8f80-a9c5426e2b32" containerName="registry-server" Dec 16 12:54:04 crc kubenswrapper[4805]: E1216 12:54:04.488659 
4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6058e6f5-6206-40ab-8f80-a9c5426e2b32" containerName="extract-utilities" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.488666 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6058e6f5-6206-40ab-8f80-a9c5426e2b32" containerName="extract-utilities" Dec 16 12:54:04 crc kubenswrapper[4805]: E1216 12:54:04.488681 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6058e6f5-6206-40ab-8f80-a9c5426e2b32" containerName="extract-content" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.488688 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6058e6f5-6206-40ab-8f80-a9c5426e2b32" containerName="extract-content" Dec 16 12:54:04 crc kubenswrapper[4805]: E1216 12:54:04.488701 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d83af4c-cd3f-43f5-83a1-470f29d4ca53" containerName="extract-utilities" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.488708 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d83af4c-cd3f-43f5-83a1-470f29d4ca53" containerName="extract-utilities" Dec 16 12:54:04 crc kubenswrapper[4805]: E1216 12:54:04.488739 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d83af4c-cd3f-43f5-83a1-470f29d4ca53" containerName="extract-content" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.488747 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d83af4c-cd3f-43f5-83a1-470f29d4ca53" containerName="extract-content" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.488993 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d83af4c-cd3f-43f5-83a1-470f29d4ca53" containerName="registry-server" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.489022 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6058e6f5-6206-40ab-8f80-a9c5426e2b32" containerName="registry-server" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.490665 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.517623 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l78r7"] Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.627846 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e60e4401-e229-4786-b3be-dd0f2da694d6-catalog-content\") pod \"redhat-operators-l78r7\" (UID: \"e60e4401-e229-4786-b3be-dd0f2da694d6\") " pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.627923 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j44rv\" (UniqueName: \"kubernetes.io/projected/e60e4401-e229-4786-b3be-dd0f2da694d6-kube-api-access-j44rv\") pod \"redhat-operators-l78r7\" (UID: \"e60e4401-e229-4786-b3be-dd0f2da694d6\") " pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.628611 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e60e4401-e229-4786-b3be-dd0f2da694d6-utilities\") pod \"redhat-operators-l78r7\" (UID: \"e60e4401-e229-4786-b3be-dd0f2da694d6\") " pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.730898 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e60e4401-e229-4786-b3be-dd0f2da694d6-catalog-content\") pod \"redhat-operators-l78r7\" (UID: \"e60e4401-e229-4786-b3be-dd0f2da694d6\") " pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.730975 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j44rv\" (UniqueName: \"kubernetes.io/projected/e60e4401-e229-4786-b3be-dd0f2da694d6-kube-api-access-j44rv\") pod \"redhat-operators-l78r7\" (UID: \"e60e4401-e229-4786-b3be-dd0f2da694d6\") " pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.731071 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e60e4401-e229-4786-b3be-dd0f2da694d6-utilities\") pod \"redhat-operators-l78r7\" (UID: \"e60e4401-e229-4786-b3be-dd0f2da694d6\") " pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.731868 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e60e4401-e229-4786-b3be-dd0f2da694d6-utilities\") pod \"redhat-operators-l78r7\" (UID: \"e60e4401-e229-4786-b3be-dd0f2da694d6\") " pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.731864 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e60e4401-e229-4786-b3be-dd0f2da694d6-catalog-content\") pod \"redhat-operators-l78r7\" (UID: \"e60e4401-e229-4786-b3be-dd0f2da694d6\") " pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.755672 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-j44rv\" (UniqueName: \"kubernetes.io/projected/e60e4401-e229-4786-b3be-dd0f2da694d6-kube-api-access-j44rv\") pod \"redhat-operators-l78r7\" (UID: \"e60e4401-e229-4786-b3be-dd0f2da694d6\") " pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:04 crc kubenswrapper[4805]: I1216 12:54:04.819722 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:05 crc kubenswrapper[4805]: I1216 12:54:05.374782 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l78r7"] Dec 16 12:54:05 crc kubenswrapper[4805]: I1216 12:54:05.516462 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l78r7" event={"ID":"e60e4401-e229-4786-b3be-dd0f2da694d6","Type":"ContainerStarted","Data":"22142531cfec4da04dd83ea697f018d865cfc6e886c41c80d4d64cf6888aa0d8"} Dec 16 12:54:06 crc kubenswrapper[4805]: I1216 12:54:06.535760 4805 generic.go:334] "Generic (PLEG): container finished" podID="e60e4401-e229-4786-b3be-dd0f2da694d6" containerID="0721c61264beb100e978e1ce3a4feb28d2e18db64955ea0ebfb0f250d8e10737" exitCode=0 Dec 16 12:54:06 crc kubenswrapper[4805]: I1216 12:54:06.536016 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l78r7" event={"ID":"e60e4401-e229-4786-b3be-dd0f2da694d6","Type":"ContainerDied","Data":"0721c61264beb100e978e1ce3a4feb28d2e18db64955ea0ebfb0f250d8e10737"} Dec 16 12:54:07 crc kubenswrapper[4805]: I1216 12:54:07.546254 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l78r7" event={"ID":"e60e4401-e229-4786-b3be-dd0f2da694d6","Type":"ContainerStarted","Data":"cf568cf0729a48ecdf4b4d41c12966af593738fd59d1d1eb0e8c6366bebe15ce"} Dec 16 12:54:12 crc kubenswrapper[4805]: I1216 12:54:12.599200 4805 generic.go:334] "Generic (PLEG): container finished" podID="e60e4401-e229-4786-b3be-dd0f2da694d6" containerID="cf568cf0729a48ecdf4b4d41c12966af593738fd59d1d1eb0e8c6366bebe15ce" exitCode=0 Dec 16 12:54:12 crc kubenswrapper[4805]: I1216 12:54:12.599271 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l78r7" event={"ID":"e60e4401-e229-4786-b3be-dd0f2da694d6","Type":"ContainerDied","Data":"cf568cf0729a48ecdf4b4d41c12966af593738fd59d1d1eb0e8c6366bebe15ce"} Dec 16 12:54:13 crc kubenswrapper[4805]: I1216 12:54:13.609663 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l78r7" event={"ID":"e60e4401-e229-4786-b3be-dd0f2da694d6","Type":"ContainerStarted","Data":"499f382fed34dbae18e77c2345936cccf79b4d2fc6f6bac2a2de2865ddbec14c"} Dec 16 12:54:13 crc kubenswrapper[4805]: I1216 12:54:13.641926 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l78r7" podStartSLOduration=3.002496963 podStartE2EDuration="9.641890625s" podCreationTimestamp="2025-12-16 12:54:04 +0000 UTC" firstStartedPulling="2025-12-16 12:54:06.538083944 +0000 UTC m=+3520.256341759" lastFinishedPulling="2025-12-16 12:54:13.177477616 +0000 UTC m=+3526.895735421" observedRunningTime="2025-12-16 12:54:13.631415105 +0000 UTC m=+3527.349672930" watchObservedRunningTime="2025-12-16 12:54:13.641890625 +0000 UTC m=+3527.360148440" Dec 16 12:54:14 crc kubenswrapper[4805]: I1216 12:54:14.820423 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l78r7" 
Dec 16 12:54:14 crc kubenswrapper[4805]: I1216 12:54:14.820959 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:15 crc kubenswrapper[4805]: I1216 12:54:15.867846 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l78r7" podUID="e60e4401-e229-4786-b3be-dd0f2da694d6" containerName="registry-server" probeResult="failure" output=< Dec 16 12:54:15 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 16 12:54:15 crc kubenswrapper[4805]: > Dec 16 12:54:24 crc kubenswrapper[4805]: I1216 12:54:24.882769 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:24 crc kubenswrapper[4805]: I1216 12:54:24.943495 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:25 crc kubenswrapper[4805]: I1216 12:54:25.129133 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l78r7"] Dec 16 12:54:26 crc kubenswrapper[4805]: I1216 12:54:26.747315 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l78r7" podUID="e60e4401-e229-4786-b3be-dd0f2da694d6" containerName="registry-server" containerID="cri-o://499f382fed34dbae18e77c2345936cccf79b4d2fc6f6bac2a2de2865ddbec14c" gracePeriod=2 Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.671126 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.687239 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j44rv\" (UniqueName: \"kubernetes.io/projected/e60e4401-e229-4786-b3be-dd0f2da694d6-kube-api-access-j44rv\") pod \"e60e4401-e229-4786-b3be-dd0f2da694d6\" (UID: \"e60e4401-e229-4786-b3be-dd0f2da694d6\") " Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.687372 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e60e4401-e229-4786-b3be-dd0f2da694d6-utilities\") pod \"e60e4401-e229-4786-b3be-dd0f2da694d6\" (UID: \"e60e4401-e229-4786-b3be-dd0f2da694d6\") " Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.687405 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e60e4401-e229-4786-b3be-dd0f2da694d6-catalog-content\") pod \"e60e4401-e229-4786-b3be-dd0f2da694d6\" (UID: \"e60e4401-e229-4786-b3be-dd0f2da694d6\") " Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.688351 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e60e4401-e229-4786-b3be-dd0f2da694d6-utilities" (OuterVolumeSpecName: "utilities") pod "e60e4401-e229-4786-b3be-dd0f2da694d6" (UID: "e60e4401-e229-4786-b3be-dd0f2da694d6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.710489 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e60e4401-e229-4786-b3be-dd0f2da694d6-kube-api-access-j44rv" (OuterVolumeSpecName: "kube-api-access-j44rv") pod "e60e4401-e229-4786-b3be-dd0f2da694d6" (UID: "e60e4401-e229-4786-b3be-dd0f2da694d6"). InnerVolumeSpecName "kube-api-access-j44rv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.768628 4805 generic.go:334] "Generic (PLEG): container finished" podID="e60e4401-e229-4786-b3be-dd0f2da694d6" containerID="499f382fed34dbae18e77c2345936cccf79b4d2fc6f6bac2a2de2865ddbec14c" exitCode=0 Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.768673 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l78r7" event={"ID":"e60e4401-e229-4786-b3be-dd0f2da694d6","Type":"ContainerDied","Data":"499f382fed34dbae18e77c2345936cccf79b4d2fc6f6bac2a2de2865ddbec14c"} Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.768704 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l78r7" event={"ID":"e60e4401-e229-4786-b3be-dd0f2da694d6","Type":"ContainerDied","Data":"22142531cfec4da04dd83ea697f018d865cfc6e886c41c80d4d64cf6888aa0d8"} Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.768735 4805 scope.go:117] "RemoveContainer" containerID="499f382fed34dbae18e77c2345936cccf79b4d2fc6f6bac2a2de2865ddbec14c" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.768904 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l78r7" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.794423 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j44rv\" (UniqueName: \"kubernetes.io/projected/e60e4401-e229-4786-b3be-dd0f2da694d6-kube-api-access-j44rv\") on node \"crc\" DevicePath \"\"" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.794461 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e60e4401-e229-4786-b3be-dd0f2da694d6-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.803038 4805 scope.go:117] "RemoveContainer" containerID="cf568cf0729a48ecdf4b4d41c12966af593738fd59d1d1eb0e8c6366bebe15ce" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.837933 4805 scope.go:117] "RemoveContainer" containerID="0721c61264beb100e978e1ce3a4feb28d2e18db64955ea0ebfb0f250d8e10737" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.890671 4805 scope.go:117] "RemoveContainer" containerID="499f382fed34dbae18e77c2345936cccf79b4d2fc6f6bac2a2de2865ddbec14c" Dec 16 12:54:27 crc kubenswrapper[4805]: E1216 12:54:27.891269 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"499f382fed34dbae18e77c2345936cccf79b4d2fc6f6bac2a2de2865ddbec14c\": container with ID starting with 499f382fed34dbae18e77c2345936cccf79b4d2fc6f6bac2a2de2865ddbec14c not found: ID does not exist" containerID="499f382fed34dbae18e77c2345936cccf79b4d2fc6f6bac2a2de2865ddbec14c" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.891321 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"499f382fed34dbae18e77c2345936cccf79b4d2fc6f6bac2a2de2865ddbec14c"} err="failed to get container status \"499f382fed34dbae18e77c2345936cccf79b4d2fc6f6bac2a2de2865ddbec14c\": rpc error: code = NotFound desc = could not find container \"499f382fed34dbae18e77c2345936cccf79b4d2fc6f6bac2a2de2865ddbec14c\": container with ID starting with 499f382fed34dbae18e77c2345936cccf79b4d2fc6f6bac2a2de2865ddbec14c not found: ID does not exist" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.891351 4805 scope.go:117] "RemoveContainer" containerID="cf568cf0729a48ecdf4b4d41c12966af593738fd59d1d1eb0e8c6366bebe15ce" Dec 16 12:54:27 crc kubenswrapper[4805]: E1216 12:54:27.891846 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf568cf0729a48ecdf4b4d41c12966af593738fd59d1d1eb0e8c6366bebe15ce\": container with ID starting with cf568cf0729a48ecdf4b4d41c12966af593738fd59d1d1eb0e8c6366bebe15ce not found: ID does not exist" containerID="cf568cf0729a48ecdf4b4d41c12966af593738fd59d1d1eb0e8c6366bebe15ce" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.891917 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf568cf0729a48ecdf4b4d41c12966af593738fd59d1d1eb0e8c6366bebe15ce"} err="failed to get container status \"cf568cf0729a48ecdf4b4d41c12966af593738fd59d1d1eb0e8c6366bebe15ce\": rpc error: code = NotFound desc = could not find container \"cf568cf0729a48ecdf4b4d41c12966af593738fd59d1d1eb0e8c6366bebe15ce\": container with ID starting with cf568cf0729a48ecdf4b4d41c12966af593738fd59d1d1eb0e8c6366bebe15ce not found: ID does not exist" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.891970 4805 scope.go:117] "RemoveContainer" containerID="0721c61264beb100e978e1ce3a4feb28d2e18db64955ea0ebfb0f250d8e10737" Dec 16 12:54:27 crc kubenswrapper[4805]: E1216 12:54:27.892435 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0721c61264beb100e978e1ce3a4feb28d2e18db64955ea0ebfb0f250d8e10737\": container with ID starting with 0721c61264beb100e978e1ce3a4feb28d2e18db64955ea0ebfb0f250d8e10737 not found: ID does not exist" containerID="0721c61264beb100e978e1ce3a4feb28d2e18db64955ea0ebfb0f250d8e10737" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.892482 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0721c61264beb100e978e1ce3a4feb28d2e18db64955ea0ebfb0f250d8e10737"} err="failed to get container status \"0721c61264beb100e978e1ce3a4feb28d2e18db64955ea0ebfb0f250d8e10737\": rpc error: code = NotFound desc = could not find container \"0721c61264beb100e978e1ce3a4feb28d2e18db64955ea0ebfb0f250d8e10737\": container with ID starting with 0721c61264beb100e978e1ce3a4feb28d2e18db64955ea0ebfb0f250d8e10737 not found: ID does not exist" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.923754 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e60e4401-e229-4786-b3be-dd0f2da694d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e60e4401-e229-4786-b3be-dd0f2da694d6" (UID: "e60e4401-e229-4786-b3be-dd0f2da694d6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:54:27 crc kubenswrapper[4805]: I1216 12:54:27.998002 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e60e4401-e229-4786-b3be-dd0f2da694d6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:54:28 crc kubenswrapper[4805]: I1216 12:54:28.124493 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l78r7"] Dec 16 12:54:28 crc kubenswrapper[4805]: I1216 12:54:28.142069 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l78r7"] Dec 16 12:54:28 crc kubenswrapper[4805]: I1216 12:54:28.534043 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e60e4401-e229-4786-b3be-dd0f2da694d6" path="/var/lib/kubelet/pods/e60e4401-e229-4786-b3be-dd0f2da694d6/volumes" Dec 16 12:55:27 crc kubenswrapper[4805]: I1216 12:55:27.071650 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:55:27 crc kubenswrapper[4805]: I1216 12:55:27.072372 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:55:57 crc kubenswrapper[4805]: I1216 12:55:57.071342 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:55:57 crc kubenswrapper[4805]: I1216 12:55:57.073133 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:55:59 crc kubenswrapper[4805]: I1216 12:55:59.961711 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2qttf"] Dec 16 12:55:59 crc kubenswrapper[4805]: E1216 12:55:59.962628 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e60e4401-e229-4786-b3be-dd0f2da694d6" containerName="extract-utilities" Dec 16 12:55:59 crc kubenswrapper[4805]: I1216 12:55:59.962642 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e60e4401-e229-4786-b3be-dd0f2da694d6" containerName="extract-utilities" Dec 16 12:55:59 crc kubenswrapper[4805]: E1216 12:55:59.962657 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e60e4401-e229-4786-b3be-dd0f2da694d6" containerName="extract-content" Dec 16 12:55:59 crc kubenswrapper[4805]: I1216 12:55:59.962664 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e60e4401-e229-4786-b3be-dd0f2da694d6" containerName="extract-content" Dec 16 12:55:59 crc kubenswrapper[4805]: E1216 12:55:59.962678 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e60e4401-e229-4786-b3be-dd0f2da694d6" 
containerName="registry-server" Dec 16 12:55:59 crc kubenswrapper[4805]: I1216 12:55:59.962684 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e60e4401-e229-4786-b3be-dd0f2da694d6" containerName="registry-server" Dec 16 12:55:59 crc kubenswrapper[4805]: I1216 12:55:59.962904 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e60e4401-e229-4786-b3be-dd0f2da694d6" containerName="registry-server" Dec 16 12:55:59 crc kubenswrapper[4805]: I1216 12:55:59.964381 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:55:59 crc kubenswrapper[4805]: I1216 12:55:59.975702 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2qttf"] Dec 16 12:56:00 crc kubenswrapper[4805]: I1216 12:56:00.145180 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkzvr\" (UniqueName: \"kubernetes.io/projected/f1a1e938-6853-4b58-a229-14c9561bf260-kube-api-access-kkzvr\") pod \"certified-operators-2qttf\" (UID: \"f1a1e938-6853-4b58-a229-14c9561bf260\") " pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:00 crc kubenswrapper[4805]: I1216 12:56:00.145253 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a1e938-6853-4b58-a229-14c9561bf260-catalog-content\") pod \"certified-operators-2qttf\" (UID: \"f1a1e938-6853-4b58-a229-14c9561bf260\") " pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:00 crc kubenswrapper[4805]: I1216 12:56:00.145355 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a1e938-6853-4b58-a229-14c9561bf260-utilities\") pod \"certified-operators-2qttf\" (UID: \"f1a1e938-6853-4b58-a229-14c9561bf260\") " pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:00 crc kubenswrapper[4805]: I1216 12:56:00.247726 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a1e938-6853-4b58-a229-14c9561bf260-utilities\") pod \"certified-operators-2qttf\" (UID: \"f1a1e938-6853-4b58-a229-14c9561bf260\") " pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:00 crc kubenswrapper[4805]: I1216 12:56:00.247944 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkzvr\" (UniqueName: \"kubernetes.io/projected/f1a1e938-6853-4b58-a229-14c9561bf260-kube-api-access-kkzvr\") pod \"certified-operators-2qttf\" (UID: \"f1a1e938-6853-4b58-a229-14c9561bf260\") " pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:00 crc kubenswrapper[4805]: I1216 12:56:00.248026 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a1e938-6853-4b58-a229-14c9561bf260-catalog-content\") pod \"certified-operators-2qttf\" (UID: \"f1a1e938-6853-4b58-a229-14c9561bf260\") " pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:00 crc kubenswrapper[4805]: I1216 12:56:00.248309 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a1e938-6853-4b58-a229-14c9561bf260-utilities\") pod \"certified-operators-2qttf\" (UID: \"f1a1e938-6853-4b58-a229-14c9561bf260\") " 
pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:00 crc kubenswrapper[4805]: I1216 12:56:00.248385 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a1e938-6853-4b58-a229-14c9561bf260-catalog-content\") pod \"certified-operators-2qttf\" (UID: \"f1a1e938-6853-4b58-a229-14c9561bf260\") " pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:00 crc kubenswrapper[4805]: I1216 12:56:00.275538 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkzvr\" (UniqueName: \"kubernetes.io/projected/f1a1e938-6853-4b58-a229-14c9561bf260-kube-api-access-kkzvr\") pod \"certified-operators-2qttf\" (UID: \"f1a1e938-6853-4b58-a229-14c9561bf260\") " pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:00 crc kubenswrapper[4805]: I1216 12:56:00.289767 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:00 crc kubenswrapper[4805]: I1216 12:56:00.794503 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2qttf"] Dec 16 12:56:00 crc kubenswrapper[4805]: I1216 12:56:00.849367 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qttf" event={"ID":"f1a1e938-6853-4b58-a229-14c9561bf260","Type":"ContainerStarted","Data":"c4f31d8ac6d783661aac7ae875fcb2b6a6534cfff627490e4ad00b8103bfa438"} Dec 16 12:56:01 crc kubenswrapper[4805]: I1216 12:56:01.861237 4805 generic.go:334] "Generic (PLEG): container finished" podID="f1a1e938-6853-4b58-a229-14c9561bf260" containerID="4b1f0cfa542ca18ded2d1d31b99395e191a2369c3f08d36fae8d8affdef97980" exitCode=0 Dec 16 12:56:01 crc kubenswrapper[4805]: I1216 12:56:01.861293 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qttf" event={"ID":"f1a1e938-6853-4b58-a229-14c9561bf260","Type":"ContainerDied","Data":"4b1f0cfa542ca18ded2d1d31b99395e191a2369c3f08d36fae8d8affdef97980"} Dec 16 12:56:02 crc kubenswrapper[4805]: I1216 12:56:02.871636 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qttf" event={"ID":"f1a1e938-6853-4b58-a229-14c9561bf260","Type":"ContainerStarted","Data":"dd2219c5cee2682845a586f17de84057c85f5a859907f068eeff152e80a8296e"} Dec 16 12:56:03 crc kubenswrapper[4805]: I1216 12:56:03.881512 4805 generic.go:334] "Generic (PLEG): container finished" podID="f1a1e938-6853-4b58-a229-14c9561bf260" containerID="dd2219c5cee2682845a586f17de84057c85f5a859907f068eeff152e80a8296e" exitCode=0 Dec 16 12:56:03 crc kubenswrapper[4805]: I1216 12:56:03.881606 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qttf" event={"ID":"f1a1e938-6853-4b58-a229-14c9561bf260","Type":"ContainerDied","Data":"dd2219c5cee2682845a586f17de84057c85f5a859907f068eeff152e80a8296e"} Dec 16 12:56:05 crc kubenswrapper[4805]: I1216 12:56:05.905690 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qttf" event={"ID":"f1a1e938-6853-4b58-a229-14c9561bf260","Type":"ContainerStarted","Data":"8b099fd0856b0cf9a549936db566db5a68446cf20f5bda227812ad87a02a1ad3"} Dec 16 12:56:05 crc kubenswrapper[4805]: I1216 12:56:05.942920 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2qttf" 
podStartSLOduration=3.985334638 podStartE2EDuration="6.9429012s" podCreationTimestamp="2025-12-16 12:55:59 +0000 UTC" firstStartedPulling="2025-12-16 12:56:01.862859396 +0000 UTC m=+3635.581117201" lastFinishedPulling="2025-12-16 12:56:04.820425958 +0000 UTC m=+3638.538683763" observedRunningTime="2025-12-16 12:56:05.933597313 +0000 UTC m=+3639.651855118" watchObservedRunningTime="2025-12-16 12:56:05.9429012 +0000 UTC m=+3639.661159015" Dec 16 12:56:10 crc kubenswrapper[4805]: I1216 12:56:10.290850 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:10 crc kubenswrapper[4805]: I1216 12:56:10.291391 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:10 crc kubenswrapper[4805]: I1216 12:56:10.346732 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:11 crc kubenswrapper[4805]: I1216 12:56:11.012029 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:11 crc kubenswrapper[4805]: I1216 12:56:11.105063 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2qttf"] Dec 16 12:56:12 crc kubenswrapper[4805]: I1216 12:56:12.973320 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2qttf" podUID="f1a1e938-6853-4b58-a229-14c9561bf260" containerName="registry-server" containerID="cri-o://8b099fd0856b0cf9a549936db566db5a68446cf20f5bda227812ad87a02a1ad3" gracePeriod=2 Dec 16 12:56:13 crc kubenswrapper[4805]: I1216 12:56:13.554596 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:13 crc kubenswrapper[4805]: I1216 12:56:13.639033 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a1e938-6853-4b58-a229-14c9561bf260-utilities\") pod \"f1a1e938-6853-4b58-a229-14c9561bf260\" (UID: \"f1a1e938-6853-4b58-a229-14c9561bf260\") " Dec 16 12:56:13 crc kubenswrapper[4805]: I1216 12:56:13.639131 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a1e938-6853-4b58-a229-14c9561bf260-catalog-content\") pod \"f1a1e938-6853-4b58-a229-14c9561bf260\" (UID: \"f1a1e938-6853-4b58-a229-14c9561bf260\") " Dec 16 12:56:13 crc kubenswrapper[4805]: I1216 12:56:13.639273 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkzvr\" (UniqueName: \"kubernetes.io/projected/f1a1e938-6853-4b58-a229-14c9561bf260-kube-api-access-kkzvr\") pod \"f1a1e938-6853-4b58-a229-14c9561bf260\" (UID: \"f1a1e938-6853-4b58-a229-14c9561bf260\") " Dec 16 12:56:13 crc kubenswrapper[4805]: I1216 12:56:13.640209 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1a1e938-6853-4b58-a229-14c9561bf260-utilities" (OuterVolumeSpecName: "utilities") pod "f1a1e938-6853-4b58-a229-14c9561bf260" (UID: "f1a1e938-6853-4b58-a229-14c9561bf260"). InnerVolumeSpecName "utilities". 
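
The "Observed pod startup duration" entry above is internally consistent: watchObservedRunningTime (12:56:05.9429012) minus podCreationTimestamp (12:55:59) gives the 6.9429012s podStartE2EDuration, and subtracting the image-pull window (firstStartedPulling to lastFinishedPulling, 2.957566562s) leaves the 3.985334638s podStartSLOduration. A minimal Go sketch of that arithmetic, using only the timestamps quoted in the entry (it reproduces the reported relationship, it is not the kubelet implementation):

package main

import (
	"fmt"
	"time"
)

// podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
// podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-12-16 12:55:59 +0000 UTC")
	firstPull := parse("2025-12-16 12:56:01.862859396 +0000 UTC")
	lastPull := parse("2025-12-16 12:56:04.820425958 +0000 UTC")
	observed := parse("2025-12-16 12:56:05.9429012 +0000 UTC")

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(e2e, slo) // 6.9429012s 3.985334638s
}
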
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:56:13 crc kubenswrapper[4805]: I1216 12:56:13.641130 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a1e938-6853-4b58-a229-14c9561bf260-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 12:56:13 crc kubenswrapper[4805]: I1216 12:56:13.654847 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a1e938-6853-4b58-a229-14c9561bf260-kube-api-access-kkzvr" (OuterVolumeSpecName: "kube-api-access-kkzvr") pod "f1a1e938-6853-4b58-a229-14c9561bf260" (UID: "f1a1e938-6853-4b58-a229-14c9561bf260"). InnerVolumeSpecName "kube-api-access-kkzvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 12:56:13 crc kubenswrapper[4805]: I1216 12:56:13.744598 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkzvr\" (UniqueName: \"kubernetes.io/projected/f1a1e938-6853-4b58-a229-14c9561bf260-kube-api-access-kkzvr\") on node \"crc\" DevicePath \"\"" Dec 16 12:56:13 crc kubenswrapper[4805]: I1216 12:56:13.878503 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1a1e938-6853-4b58-a229-14c9561bf260-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1a1e938-6853-4b58-a229-14c9561bf260" (UID: "f1a1e938-6853-4b58-a229-14c9561bf260"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 12:56:13 crc kubenswrapper[4805]: I1216 12:56:13.948884 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a1e938-6853-4b58-a229-14c9561bf260-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 12:56:13 crc kubenswrapper[4805]: I1216 12:56:13.982131 4805 generic.go:334] "Generic (PLEG): container finished" podID="f1a1e938-6853-4b58-a229-14c9561bf260" containerID="8b099fd0856b0cf9a549936db566db5a68446cf20f5bda227812ad87a02a1ad3" exitCode=0 Dec 16 12:56:13 crc kubenswrapper[4805]: I1216 12:56:13.982175 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qttf" event={"ID":"f1a1e938-6853-4b58-a229-14c9561bf260","Type":"ContainerDied","Data":"8b099fd0856b0cf9a549936db566db5a68446cf20f5bda227812ad87a02a1ad3"} Dec 16 12:56:13 crc kubenswrapper[4805]: I1216 12:56:13.982227 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qttf" event={"ID":"f1a1e938-6853-4b58-a229-14c9561bf260","Type":"ContainerDied","Data":"c4f31d8ac6d783661aac7ae875fcb2b6a6534cfff627490e4ad00b8103bfa438"} Dec 16 12:56:13 crc kubenswrapper[4805]: I1216 12:56:13.982249 4805 scope.go:117] "RemoveContainer" containerID="8b099fd0856b0cf9a549936db566db5a68446cf20f5bda227812ad87a02a1ad3" Dec 16 12:56:13 crc kubenswrapper[4805]: I1216 12:56:13.982248 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2qttf" Dec 16 12:56:14 crc kubenswrapper[4805]: I1216 12:56:14.012363 4805 scope.go:117] "RemoveContainer" containerID="dd2219c5cee2682845a586f17de84057c85f5a859907f068eeff152e80a8296e" Dec 16 12:56:14 crc kubenswrapper[4805]: I1216 12:56:14.037044 4805 scope.go:117] "RemoveContainer" containerID="4b1f0cfa542ca18ded2d1d31b99395e191a2369c3f08d36fae8d8affdef97980" Dec 16 12:56:14 crc kubenswrapper[4805]: I1216 12:56:14.047129 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2qttf"] Dec 16 12:56:14 crc kubenswrapper[4805]: I1216 12:56:14.055555 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2qttf"] Dec 16 12:56:14 crc kubenswrapper[4805]: I1216 12:56:14.089009 4805 scope.go:117] "RemoveContainer" containerID="8b099fd0856b0cf9a549936db566db5a68446cf20f5bda227812ad87a02a1ad3" Dec 16 12:56:14 crc kubenswrapper[4805]: E1216 12:56:14.092571 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b099fd0856b0cf9a549936db566db5a68446cf20f5bda227812ad87a02a1ad3\": container with ID starting with 8b099fd0856b0cf9a549936db566db5a68446cf20f5bda227812ad87a02a1ad3 not found: ID does not exist" containerID="8b099fd0856b0cf9a549936db566db5a68446cf20f5bda227812ad87a02a1ad3" Dec 16 12:56:14 crc kubenswrapper[4805]: I1216 12:56:14.092634 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b099fd0856b0cf9a549936db566db5a68446cf20f5bda227812ad87a02a1ad3"} err="failed to get container status \"8b099fd0856b0cf9a549936db566db5a68446cf20f5bda227812ad87a02a1ad3\": rpc error: code = NotFound desc = could not find container \"8b099fd0856b0cf9a549936db566db5a68446cf20f5bda227812ad87a02a1ad3\": container with ID starting with 8b099fd0856b0cf9a549936db566db5a68446cf20f5bda227812ad87a02a1ad3 not found: ID does not exist" Dec 16 12:56:14 crc kubenswrapper[4805]: I1216 12:56:14.092670 4805 scope.go:117] "RemoveContainer" containerID="dd2219c5cee2682845a586f17de84057c85f5a859907f068eeff152e80a8296e" Dec 16 12:56:14 crc kubenswrapper[4805]: E1216 12:56:14.096438 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd2219c5cee2682845a586f17de84057c85f5a859907f068eeff152e80a8296e\": container with ID starting with dd2219c5cee2682845a586f17de84057c85f5a859907f068eeff152e80a8296e not found: ID does not exist" containerID="dd2219c5cee2682845a586f17de84057c85f5a859907f068eeff152e80a8296e" Dec 16 12:56:14 crc kubenswrapper[4805]: I1216 12:56:14.096495 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd2219c5cee2682845a586f17de84057c85f5a859907f068eeff152e80a8296e"} err="failed to get container status \"dd2219c5cee2682845a586f17de84057c85f5a859907f068eeff152e80a8296e\": rpc error: code = NotFound desc = could not find container \"dd2219c5cee2682845a586f17de84057c85f5a859907f068eeff152e80a8296e\": container with ID starting with dd2219c5cee2682845a586f17de84057c85f5a859907f068eeff152e80a8296e not found: ID does not exist" Dec 16 12:56:14 crc kubenswrapper[4805]: I1216 12:56:14.096532 4805 scope.go:117] "RemoveContainer" containerID="4b1f0cfa542ca18ded2d1d31b99395e191a2369c3f08d36fae8d8affdef97980" Dec 16 12:56:14 crc kubenswrapper[4805]: E1216 12:56:14.097958 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4b1f0cfa542ca18ded2d1d31b99395e191a2369c3f08d36fae8d8affdef97980\": container with ID starting with 4b1f0cfa542ca18ded2d1d31b99395e191a2369c3f08d36fae8d8affdef97980 not found: ID does not exist" containerID="4b1f0cfa542ca18ded2d1d31b99395e191a2369c3f08d36fae8d8affdef97980" Dec 16 12:56:14 crc kubenswrapper[4805]: I1216 12:56:14.098000 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b1f0cfa542ca18ded2d1d31b99395e191a2369c3f08d36fae8d8affdef97980"} err="failed to get container status \"4b1f0cfa542ca18ded2d1d31b99395e191a2369c3f08d36fae8d8affdef97980\": rpc error: code = NotFound desc = could not find container \"4b1f0cfa542ca18ded2d1d31b99395e191a2369c3f08d36fae8d8affdef97980\": container with ID starting with 4b1f0cfa542ca18ded2d1d31b99395e191a2369c3f08d36fae8d8affdef97980 not found: ID does not exist" Dec 16 12:56:14 crc kubenswrapper[4805]: I1216 12:56:14.535692 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1a1e938-6853-4b58-a229-14c9561bf260" path="/var/lib/kubelet/pods/f1a1e938-6853-4b58-a229-14c9561bf260/volumes" Dec 16 12:56:27 crc kubenswrapper[4805]: I1216 12:56:27.071515 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 12:56:27 crc kubenswrapper[4805]: I1216 12:56:27.072070 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 12:56:27 crc kubenswrapper[4805]: I1216 12:56:27.072129 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 12:56:27 crc kubenswrapper[4805]: I1216 12:56:27.073060 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 12:56:27 crc kubenswrapper[4805]: I1216 12:56:27.073119 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" gracePeriod=600 Dec 16 12:56:27 crc kubenswrapper[4805]: E1216 12:56:27.196918 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:56:28 crc kubenswrapper[4805]: I1216 12:56:28.111242 4805 generic.go:334] 
"Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" exitCode=0 Dec 16 12:56:28 crc kubenswrapper[4805]: I1216 12:56:28.111529 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d"} Dec 16 12:56:28 crc kubenswrapper[4805]: I1216 12:56:28.111564 4805 scope.go:117] "RemoveContainer" containerID="8bd72a242df420e691b50771a7a08f21fc30a06a2f411fb863baaddb9553decd" Dec 16 12:56:28 crc kubenswrapper[4805]: I1216 12:56:28.112444 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:56:28 crc kubenswrapper[4805]: E1216 12:56:28.112868 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:56:40 crc kubenswrapper[4805]: I1216 12:56:40.522829 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:56:40 crc kubenswrapper[4805]: E1216 12:56:40.523670 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:56:51 crc kubenswrapper[4805]: I1216 12:56:51.522764 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:56:51 crc kubenswrapper[4805]: E1216 12:56:51.523412 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:57:03 crc kubenswrapper[4805]: I1216 12:57:03.523263 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:57:03 crc kubenswrapper[4805]: E1216 12:57:03.523944 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:57:18 crc kubenswrapper[4805]: I1216 12:57:18.530068 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" 
Dec 16 12:57:18 crc kubenswrapper[4805]: E1216 12:57:18.530888 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:57:32 crc kubenswrapper[4805]: I1216 12:57:32.522484 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:57:32 crc kubenswrapper[4805]: E1216 12:57:32.523372 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:57:45 crc kubenswrapper[4805]: I1216 12:57:45.522711 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:57:45 crc kubenswrapper[4805]: E1216 12:57:45.523601 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:57:59 crc kubenswrapper[4805]: I1216 12:57:59.524221 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:57:59 crc kubenswrapper[4805]: E1216 12:57:59.525073 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:58:13 crc kubenswrapper[4805]: I1216 12:58:13.523309 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:58:13 crc kubenswrapper[4805]: E1216 12:58:13.524098 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:58:28 crc kubenswrapper[4805]: I1216 12:58:28.522720 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:58:28 crc kubenswrapper[4805]: E1216 12:58:28.523517 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:58:41 crc kubenswrapper[4805]: I1216 12:58:41.523336 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:58:41 crc kubenswrapper[4805]: E1216 12:58:41.524070 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:58:54 crc kubenswrapper[4805]: I1216 12:58:54.524812 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:58:54 crc kubenswrapper[4805]: E1216 12:58:54.525590 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:59:06 crc kubenswrapper[4805]: I1216 12:59:06.530383 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:59:06 crc kubenswrapper[4805]: E1216 12:59:06.531280 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:59:17 crc kubenswrapper[4805]: I1216 12:59:17.522723 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:59:17 crc kubenswrapper[4805]: E1216 12:59:17.523521 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:59:31 crc kubenswrapper[4805]: I1216 12:59:31.522932 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:59:31 crc kubenswrapper[4805]: E1216 12:59:31.523902 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:59:44 crc kubenswrapper[4805]: I1216 12:59:44.523203 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:59:44 crc kubenswrapper[4805]: E1216 12:59:44.523930 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 12:59:59 crc kubenswrapper[4805]: I1216 12:59:59.522620 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 12:59:59 crc kubenswrapper[4805]: E1216 12:59:59.523527 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.181307 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt"] Dec 16 13:00:00 crc kubenswrapper[4805]: E1216 13:00:00.181906 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a1e938-6853-4b58-a229-14c9561bf260" containerName="registry-server" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.181937 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a1e938-6853-4b58-a229-14c9561bf260" containerName="registry-server" Dec 16 13:00:00 crc kubenswrapper[4805]: E1216 13:00:00.181958 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a1e938-6853-4b58-a229-14c9561bf260" containerName="extract-content" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.181966 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a1e938-6853-4b58-a229-14c9561bf260" containerName="extract-content" Dec 16 13:00:00 crc kubenswrapper[4805]: E1216 13:00:00.181992 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a1e938-6853-4b58-a229-14c9561bf260" containerName="extract-utilities" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.182002 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a1e938-6853-4b58-a229-14c9561bf260" containerName="extract-utilities" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.182308 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1a1e938-6853-4b58-a229-14c9561bf260" containerName="registry-server" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.184503 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.187268 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.187267 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.210739 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt"] Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.243192 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mvkt\" (UniqueName: \"kubernetes.io/projected/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-kube-api-access-8mvkt\") pod \"collect-profiles-29431500-6tcbt\" (UID: \"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.243376 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-config-volume\") pod \"collect-profiles-29431500-6tcbt\" (UID: \"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.243472 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-secret-volume\") pod \"collect-profiles-29431500-6tcbt\" (UID: \"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.345744 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mvkt\" (UniqueName: \"kubernetes.io/projected/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-kube-api-access-8mvkt\") pod \"collect-profiles-29431500-6tcbt\" (UID: \"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.345854 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-config-volume\") pod \"collect-profiles-29431500-6tcbt\" (UID: \"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.345912 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-secret-volume\") pod \"collect-profiles-29431500-6tcbt\" (UID: \"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.347069 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-config-volume\") pod 
\"collect-profiles-29431500-6tcbt\" (UID: \"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.365798 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-secret-volume\") pod \"collect-profiles-29431500-6tcbt\" (UID: \"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.376096 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mvkt\" (UniqueName: \"kubernetes.io/projected/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-kube-api-access-8mvkt\") pod \"collect-profiles-29431500-6tcbt\" (UID: \"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" Dec 16 13:00:00 crc kubenswrapper[4805]: I1216 13:00:00.523569 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" Dec 16 13:00:01 crc kubenswrapper[4805]: I1216 13:00:01.142710 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt"] Dec 16 13:00:01 crc kubenswrapper[4805]: I1216 13:00:01.505092 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" event={"ID":"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3","Type":"ContainerStarted","Data":"fb622b658f9dd879b3e17e0ff0d31bb3a7523959b506c55994b02a9e263ac212"} Dec 16 13:00:01 crc kubenswrapper[4805]: I1216 13:00:01.505460 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" event={"ID":"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3","Type":"ContainerStarted","Data":"5d99ff9952906eada6e3174c8fd81bfee07be6545dbe137ed6c2a9a7588ccbf9"} Dec 16 13:00:02 crc kubenswrapper[4805]: I1216 13:00:02.514909 4805 generic.go:334] "Generic (PLEG): container finished" podID="7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3" containerID="fb622b658f9dd879b3e17e0ff0d31bb3a7523959b506c55994b02a9e263ac212" exitCode=0 Dec 16 13:00:02 crc kubenswrapper[4805]: I1216 13:00:02.515037 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" event={"ID":"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3","Type":"ContainerDied","Data":"fb622b658f9dd879b3e17e0ff0d31bb3a7523959b506c55994b02a9e263ac212"} Dec 16 13:00:04 crc kubenswrapper[4805]: I1216 13:00:04.301891 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" Dec 16 13:00:04 crc kubenswrapper[4805]: I1216 13:00:04.421624 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mvkt\" (UniqueName: \"kubernetes.io/projected/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-kube-api-access-8mvkt\") pod \"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3\" (UID: \"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3\") " Dec 16 13:00:04 crc kubenswrapper[4805]: I1216 13:00:04.421778 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-config-volume\") pod \"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3\" (UID: \"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3\") " Dec 16 13:00:04 crc kubenswrapper[4805]: I1216 13:00:04.421844 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-secret-volume\") pod \"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3\" (UID: \"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3\") " Dec 16 13:00:04 crc kubenswrapper[4805]: I1216 13:00:04.422670 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-config-volume" (OuterVolumeSpecName: "config-volume") pod "7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3" (UID: "7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 13:00:04 crc kubenswrapper[4805]: I1216 13:00:04.429426 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3" (UID: "7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 13:00:04 crc kubenswrapper[4805]: I1216 13:00:04.438613 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-kube-api-access-8mvkt" (OuterVolumeSpecName: "kube-api-access-8mvkt") pod "7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3" (UID: "7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3"). InnerVolumeSpecName "kube-api-access-8mvkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:00:04 crc kubenswrapper[4805]: I1216 13:00:04.524495 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-config-volume\") on node \"crc\" DevicePath \"\"" Dec 16 13:00:04 crc kubenswrapper[4805]: I1216 13:00:04.524707 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 16 13:00:04 crc kubenswrapper[4805]: I1216 13:00:04.524788 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mvkt\" (UniqueName: \"kubernetes.io/projected/7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3-kube-api-access-8mvkt\") on node \"crc\" DevicePath \"\"" Dec 16 13:00:04 crc kubenswrapper[4805]: I1216 13:00:04.544757 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" event={"ID":"7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3","Type":"ContainerDied","Data":"5d99ff9952906eada6e3174c8fd81bfee07be6545dbe137ed6c2a9a7588ccbf9"} Dec 16 13:00:04 crc kubenswrapper[4805]: I1216 13:00:04.544811 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d99ff9952906eada6e3174c8fd81bfee07be6545dbe137ed6c2a9a7588ccbf9" Dec 16 13:00:04 crc kubenswrapper[4805]: I1216 13:00:04.544897 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431500-6tcbt" Dec 16 13:00:04 crc kubenswrapper[4805]: I1216 13:00:04.632313 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx"] Dec 16 13:00:04 crc kubenswrapper[4805]: I1216 13:00:04.642507 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431455-lcnfx"] Dec 16 13:00:06 crc kubenswrapper[4805]: I1216 13:00:06.655083 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d960318b-f654-493a-bc0d-48760f738455" path="/var/lib/kubelet/pods/d960318b-f654-493a-bc0d-48760f738455/volumes" Dec 16 13:00:12 crc kubenswrapper[4805]: I1216 13:00:12.523079 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 13:00:12 crc kubenswrapper[4805]: E1216 13:00:12.523954 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:00:27 crc kubenswrapper[4805]: I1216 13:00:27.522523 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 13:00:27 crc kubenswrapper[4805]: E1216 13:00:27.523381 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:00:41 crc kubenswrapper[4805]: I1216 13:00:41.523115 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 13:00:41 crc kubenswrapper[4805]: E1216 13:00:41.525208 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:00:54 crc kubenswrapper[4805]: I1216 13:00:54.523086 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 13:00:54 crc kubenswrapper[4805]: E1216 13:00:54.523924 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.163269 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29431501-jlgq8"] Dec 16 13:01:00 crc kubenswrapper[4805]: E1216 13:01:00.164392 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3" containerName="collect-profiles" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.164415 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3" containerName="collect-profiles" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.164716 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d70ac0f-2b9b-45b3-a02d-bf4449cb22d3" containerName="collect-profiles" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.167783 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29431501-jlgq8" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.191620 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29431501-jlgq8"] Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.299500 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-fernet-keys\") pod \"keystone-cron-29431501-jlgq8\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") " pod="openstack/keystone-cron-29431501-jlgq8" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.299827 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsq8m\" (UniqueName: \"kubernetes.io/projected/aad6bcdb-7a23-48fa-b79c-69932357cf9f-kube-api-access-nsq8m\") pod \"keystone-cron-29431501-jlgq8\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") " pod="openstack/keystone-cron-29431501-jlgq8" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.300084 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-combined-ca-bundle\") pod \"keystone-cron-29431501-jlgq8\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") " pod="openstack/keystone-cron-29431501-jlgq8" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.300221 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-config-data\") pod \"keystone-cron-29431501-jlgq8\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") " pod="openstack/keystone-cron-29431501-jlgq8" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.402604 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-combined-ca-bundle\") pod \"keystone-cron-29431501-jlgq8\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") " pod="openstack/keystone-cron-29431501-jlgq8" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.402662 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-config-data\") pod \"keystone-cron-29431501-jlgq8\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") " pod="openstack/keystone-cron-29431501-jlgq8" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.402782 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-fernet-keys\") pod \"keystone-cron-29431501-jlgq8\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") " pod="openstack/keystone-cron-29431501-jlgq8" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.402810 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsq8m\" (UniqueName: \"kubernetes.io/projected/aad6bcdb-7a23-48fa-b79c-69932357cf9f-kube-api-access-nsq8m\") pod \"keystone-cron-29431501-jlgq8\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") " pod="openstack/keystone-cron-29431501-jlgq8" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.411636 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-config-data\") pod \"keystone-cron-29431501-jlgq8\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") " pod="openstack/keystone-cron-29431501-jlgq8" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.412991 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-combined-ca-bundle\") pod \"keystone-cron-29431501-jlgq8\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") " pod="openstack/keystone-cron-29431501-jlgq8" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.425104 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-fernet-keys\") pod \"keystone-cron-29431501-jlgq8\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") " pod="openstack/keystone-cron-29431501-jlgq8" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.428260 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsq8m\" (UniqueName: \"kubernetes.io/projected/aad6bcdb-7a23-48fa-b79c-69932357cf9f-kube-api-access-nsq8m\") pod \"keystone-cron-29431501-jlgq8\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") " pod="openstack/keystone-cron-29431501-jlgq8" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.511750 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29431501-jlgq8" Dec 16 13:01:00 crc kubenswrapper[4805]: I1216 13:01:00.902486 4805 scope.go:117] "RemoveContainer" containerID="659406396d073ad7a213223a5c39690649f10afa6eb3d2d3bcffa33538990b27" Dec 16 13:01:01 crc kubenswrapper[4805]: I1216 13:01:01.037487 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29431501-jlgq8"] Dec 16 13:01:01 crc kubenswrapper[4805]: I1216 13:01:01.157603 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29431501-jlgq8" event={"ID":"aad6bcdb-7a23-48fa-b79c-69932357cf9f","Type":"ContainerStarted","Data":"41131c2efd119c626504dcfd6be636324184060a5a928e65ec4a9f595bf6eac7"} Dec 16 13:01:02 crc kubenswrapper[4805]: I1216 13:01:02.170836 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29431501-jlgq8" event={"ID":"aad6bcdb-7a23-48fa-b79c-69932357cf9f","Type":"ContainerStarted","Data":"4ef8d04ce9135ef76fa2b52a456baf5575a9c9e97d6aa6d372cb3c9391a05529"} Dec 16 13:01:03 crc kubenswrapper[4805]: I1216 13:01:03.209295 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29431501-jlgq8" podStartSLOduration=3.209230944 podStartE2EDuration="3.209230944s" podCreationTimestamp="2025-12-16 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:01:03.2006932 +0000 UTC m=+3936.918951005" watchObservedRunningTime="2025-12-16 13:01:03.209230944 +0000 UTC m=+3936.927488759" Dec 16 13:01:06 crc kubenswrapper[4805]: I1216 13:01:06.530922 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 13:01:06 crc kubenswrapper[4805]: E1216 13:01:06.531696 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Dec 16 13:01:06 crc kubenswrapper[4805]: I1216 13:01:06.530922 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d"
Dec 16 13:01:06 crc kubenswrapper[4805]: E1216 13:01:06.531696 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9"
Dec 16 13:01:08 crc kubenswrapper[4805]: I1216 13:01:08.228029 4805 generic.go:334] "Generic (PLEG): container finished" podID="aad6bcdb-7a23-48fa-b79c-69932357cf9f" containerID="4ef8d04ce9135ef76fa2b52a456baf5575a9c9e97d6aa6d372cb3c9391a05529" exitCode=0
Dec 16 13:01:08 crc kubenswrapper[4805]: I1216 13:01:08.228170 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29431501-jlgq8" event={"ID":"aad6bcdb-7a23-48fa-b79c-69932357cf9f","Type":"ContainerDied","Data":"4ef8d04ce9135ef76fa2b52a456baf5575a9c9e97d6aa6d372cb3c9391a05529"}
Dec 16 13:01:09 crc kubenswrapper[4805]: I1216 13:01:09.716100 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29431501-jlgq8"
Dec 16 13:01:09 crc kubenswrapper[4805]: I1216 13:01:09.828633 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-combined-ca-bundle\") pod \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") "
Dec 16 13:01:09 crc kubenswrapper[4805]: I1216 13:01:09.828694 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-fernet-keys\") pod \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") "
Dec 16 13:01:09 crc kubenswrapper[4805]: I1216 13:01:09.828877 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-config-data\") pod \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") "
Dec 16 13:01:09 crc kubenswrapper[4805]: I1216 13:01:09.828984 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsq8m\" (UniqueName: \"kubernetes.io/projected/aad6bcdb-7a23-48fa-b79c-69932357cf9f-kube-api-access-nsq8m\") pod \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\" (UID: \"aad6bcdb-7a23-48fa-b79c-69932357cf9f\") "
Dec 16 13:01:09 crc kubenswrapper[4805]: I1216 13:01:09.845757 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad6bcdb-7a23-48fa-b79c-69932357cf9f-kube-api-access-nsq8m" (OuterVolumeSpecName: "kube-api-access-nsq8m") pod "aad6bcdb-7a23-48fa-b79c-69932357cf9f" (UID: "aad6bcdb-7a23-48fa-b79c-69932357cf9f"). InnerVolumeSpecName "kube-api-access-nsq8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 13:01:09 crc kubenswrapper[4805]: I1216 13:01:09.850649 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "aad6bcdb-7a23-48fa-b79c-69932357cf9f" (UID: "aad6bcdb-7a23-48fa-b79c-69932357cf9f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 13:01:09 crc kubenswrapper[4805]: I1216 13:01:09.861214 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aad6bcdb-7a23-48fa-b79c-69932357cf9f" (UID: "aad6bcdb-7a23-48fa-b79c-69932357cf9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 13:01:09 crc kubenswrapper[4805]: I1216 13:01:09.901509 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-config-data" (OuterVolumeSpecName: "config-data") pod "aad6bcdb-7a23-48fa-b79c-69932357cf9f" (UID: "aad6bcdb-7a23-48fa-b79c-69932357cf9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 16 13:01:09 crc kubenswrapper[4805]: I1216 13:01:09.930982 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 16 13:01:09 crc kubenswrapper[4805]: I1216 13:01:09.931025 4805 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 16 13:01:09 crc kubenswrapper[4805]: I1216 13:01:09.931039 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad6bcdb-7a23-48fa-b79c-69932357cf9f-config-data\") on node \"crc\" DevicePath \"\""
Dec 16 13:01:09 crc kubenswrapper[4805]: I1216 13:01:09.931050 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsq8m\" (UniqueName: \"kubernetes.io/projected/aad6bcdb-7a23-48fa-b79c-69932357cf9f-kube-api-access-nsq8m\") on node \"crc\" DevicePath \"\""
Dec 16 13:01:10 crc kubenswrapper[4805]: I1216 13:01:10.249398 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29431501-jlgq8" event={"ID":"aad6bcdb-7a23-48fa-b79c-69932357cf9f","Type":"ContainerDied","Data":"41131c2efd119c626504dcfd6be636324184060a5a928e65ec4a9f595bf6eac7"}
Dec 16 13:01:10 crc kubenswrapper[4805]: I1216 13:01:10.249436 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41131c2efd119c626504dcfd6be636324184060a5a928e65ec4a9f595bf6eac7"
Dec 16 13:01:10 crc kubenswrapper[4805]: I1216 13:01:10.249482 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29431501-jlgq8"
Dec 16 13:01:21 crc kubenswrapper[4805]: I1216 13:01:21.524130 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d"
Dec 16 13:01:21 crc kubenswrapper[4805]: E1216 13:01:21.526730 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9"
Dec 16 13:01:34 crc kubenswrapper[4805]: I1216 13:01:34.522887 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d"
Dec 16 13:01:35 crc kubenswrapper[4805]: I1216 13:01:35.493783 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"2bc9f7466a5fc1734dc99434d0dca19bdc38293e1a7c9ba5046bbe3d52dad8c8"}
Dec 16 13:03:57 crc kubenswrapper[4805]: I1216 13:03:57.071424 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 16 13:03:57 crc kubenswrapper[4805]: I1216 13:03:57.072046 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 16 13:04:27 crc kubenswrapper[4805]: I1216 13:04:27.071415 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 16 13:04:27 crc kubenswrapper[4805]: I1216 13:04:27.071949 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 16 13:04:34 crc kubenswrapper[4805]: I1216 13:04:34.741849 4805 trace.go:236] Trace[1038102915]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-tpb8r" (16-Dec-2025 13:04:27.309) (total time: 7430ms):
Dec 16 13:04:34 crc kubenswrapper[4805]: Trace[1038102915]: [7.430622776s] [7.430622776s] END
state_mem.go:107] "Deleted CPUSet assignment" podUID="aad6bcdb-7a23-48fa-b79c-69932357cf9f" containerName="keystone-cron" Dec 16 13:04:37 crc kubenswrapper[4805]: I1216 13:04:37.607282 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad6bcdb-7a23-48fa-b79c-69932357cf9f" containerName="keystone-cron" Dec 16 13:04:37 crc kubenswrapper[4805]: I1216 13:04:37.611169 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:04:37 crc kubenswrapper[4805]: I1216 13:04:37.653749 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shdpq"] Dec 16 13:04:37 crc kubenswrapper[4805]: I1216 13:04:37.660511 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c151cf78-c568-40c0-8b87-35ae0b8e693f-utilities\") pod \"redhat-operators-shdpq\" (UID: \"c151cf78-c568-40c0-8b87-35ae0b8e693f\") " pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:04:37 crc kubenswrapper[4805]: I1216 13:04:37.660623 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntxx8\" (UniqueName: \"kubernetes.io/projected/c151cf78-c568-40c0-8b87-35ae0b8e693f-kube-api-access-ntxx8\") pod \"redhat-operators-shdpq\" (UID: \"c151cf78-c568-40c0-8b87-35ae0b8e693f\") " pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:04:37 crc kubenswrapper[4805]: I1216 13:04:37.660654 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c151cf78-c568-40c0-8b87-35ae0b8e693f-catalog-content\") pod \"redhat-operators-shdpq\" (UID: \"c151cf78-c568-40c0-8b87-35ae0b8e693f\") " pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:04:37 crc kubenswrapper[4805]: I1216 13:04:37.762340 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c151cf78-c568-40c0-8b87-35ae0b8e693f-utilities\") pod \"redhat-operators-shdpq\" (UID: \"c151cf78-c568-40c0-8b87-35ae0b8e693f\") " pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:04:37 crc kubenswrapper[4805]: I1216 13:04:37.762503 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntxx8\" (UniqueName: \"kubernetes.io/projected/c151cf78-c568-40c0-8b87-35ae0b8e693f-kube-api-access-ntxx8\") pod \"redhat-operators-shdpq\" (UID: \"c151cf78-c568-40c0-8b87-35ae0b8e693f\") " pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:04:37 crc kubenswrapper[4805]: I1216 13:04:37.762568 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c151cf78-c568-40c0-8b87-35ae0b8e693f-catalog-content\") pod \"redhat-operators-shdpq\" (UID: \"c151cf78-c568-40c0-8b87-35ae0b8e693f\") " pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:04:37 crc kubenswrapper[4805]: I1216 13:04:37.762842 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c151cf78-c568-40c0-8b87-35ae0b8e693f-utilities\") pod \"redhat-operators-shdpq\" (UID: \"c151cf78-c568-40c0-8b87-35ae0b8e693f\") " pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:04:37 crc kubenswrapper[4805]: I1216 13:04:37.763465 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c151cf78-c568-40c0-8b87-35ae0b8e693f-catalog-content\") pod \"redhat-operators-shdpq\" (UID: \"c151cf78-c568-40c0-8b87-35ae0b8e693f\") " pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:04:37 crc kubenswrapper[4805]: I1216 13:04:37.785638 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntxx8\" (UniqueName: \"kubernetes.io/projected/c151cf78-c568-40c0-8b87-35ae0b8e693f-kube-api-access-ntxx8\") pod \"redhat-operators-shdpq\" (UID: \"c151cf78-c568-40c0-8b87-35ae0b8e693f\") " pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:04:37 crc kubenswrapper[4805]: I1216 13:04:37.959474 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:04:38 crc kubenswrapper[4805]: I1216 13:04:38.566082 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shdpq"] Dec 16 13:04:39 crc kubenswrapper[4805]: I1216 13:04:39.350625 4805 generic.go:334] "Generic (PLEG): container finished" podID="c151cf78-c568-40c0-8b87-35ae0b8e693f" containerID="c55febb630025c97f2af99dec0160e7aef829b4885159003034850983cb9f3bb" exitCode=0 Dec 16 13:04:39 crc kubenswrapper[4805]: I1216 13:04:39.350957 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shdpq" event={"ID":"c151cf78-c568-40c0-8b87-35ae0b8e693f","Type":"ContainerDied","Data":"c55febb630025c97f2af99dec0160e7aef829b4885159003034850983cb9f3bb"} Dec 16 13:04:39 crc kubenswrapper[4805]: I1216 13:04:39.351013 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shdpq" event={"ID":"c151cf78-c568-40c0-8b87-35ae0b8e693f","Type":"ContainerStarted","Data":"3c7d3850f874030a08a036f605a3d0cdaa3a0cc4380b2f2e268140ff54f6a85a"} Dec 16 13:04:39 crc kubenswrapper[4805]: I1216 13:04:39.355703 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 16 13:04:41 crc kubenswrapper[4805]: I1216 13:04:41.422768 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shdpq" event={"ID":"c151cf78-c568-40c0-8b87-35ae0b8e693f","Type":"ContainerStarted","Data":"6c995d91b65cfc6585f79c5905761a17c4aa64cf5de439ea1b29378f79bc5c7e"} Dec 16 13:04:44 crc kubenswrapper[4805]: I1216 13:04:44.457509 4805 generic.go:334] "Generic (PLEG): container finished" podID="c151cf78-c568-40c0-8b87-35ae0b8e693f" containerID="6c995d91b65cfc6585f79c5905761a17c4aa64cf5de439ea1b29378f79bc5c7e" exitCode=0 Dec 16 13:04:44 crc kubenswrapper[4805]: I1216 13:04:44.457741 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shdpq" event={"ID":"c151cf78-c568-40c0-8b87-35ae0b8e693f","Type":"ContainerDied","Data":"6c995d91b65cfc6585f79c5905761a17c4aa64cf5de439ea1b29378f79bc5c7e"} Dec 16 13:04:45 crc kubenswrapper[4805]: I1216 13:04:45.474776 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shdpq" event={"ID":"c151cf78-c568-40c0-8b87-35ae0b8e693f","Type":"ContainerStarted","Data":"f00146adc3cdfda5b5dc2b515956ab87f3858709656ce26d91195bb5a7f2ff9a"} Dec 16 13:04:45 crc kubenswrapper[4805]: I1216 13:04:45.505696 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-shdpq" podStartSLOduration=2.773604079 
podStartE2EDuration="8.505664513s" podCreationTimestamp="2025-12-16 13:04:37 +0000 UTC" firstStartedPulling="2025-12-16 13:04:39.35531181 +0000 UTC m=+4153.073569615" lastFinishedPulling="2025-12-16 13:04:45.087372234 +0000 UTC m=+4158.805630049" observedRunningTime="2025-12-16 13:04:45.497900622 +0000 UTC m=+4159.216158437" watchObservedRunningTime="2025-12-16 13:04:45.505664513 +0000 UTC m=+4159.223922328" Dec 16 13:04:47 crc kubenswrapper[4805]: I1216 13:04:47.960589 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:04:47 crc kubenswrapper[4805]: I1216 13:04:47.962373 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:04:49 crc kubenswrapper[4805]: I1216 13:04:49.007049 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-shdpq" podUID="c151cf78-c568-40c0-8b87-35ae0b8e693f" containerName="registry-server" probeResult="failure" output=< Dec 16 13:04:49 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 16 13:04:49 crc kubenswrapper[4805]: > Dec 16 13:04:57 crc kubenswrapper[4805]: I1216 13:04:57.071533 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 13:04:57 crc kubenswrapper[4805]: I1216 13:04:57.072097 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 13:04:57 crc kubenswrapper[4805]: I1216 13:04:57.072175 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 13:04:57 crc kubenswrapper[4805]: I1216 13:04:57.073056 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2bc9f7466a5fc1734dc99434d0dca19bdc38293e1a7c9ba5046bbe3d52dad8c8"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 13:04:57 crc kubenswrapper[4805]: I1216 13:04:57.073130 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://2bc9f7466a5fc1734dc99434d0dca19bdc38293e1a7c9ba5046bbe3d52dad8c8" gracePeriod=600 Dec 16 13:04:57 crc kubenswrapper[4805]: I1216 13:04:57.612665 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="2bc9f7466a5fc1734dc99434d0dca19bdc38293e1a7c9ba5046bbe3d52dad8c8" exitCode=0 Dec 16 13:04:57 crc kubenswrapper[4805]: I1216 13:04:57.612734 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" 
event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"2bc9f7466a5fc1734dc99434d0dca19bdc38293e1a7c9ba5046bbe3d52dad8c8"} Dec 16 13:04:57 crc kubenswrapper[4805]: I1216 13:04:57.613291 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4"} Dec 16 13:04:57 crc kubenswrapper[4805]: I1216 13:04:57.613337 4805 scope.go:117] "RemoveContainer" containerID="d6a25eb42250388e8fac1d461b714808b5babc461567cc184dabe6c424a52f5d" Dec 16 13:04:58 crc kubenswrapper[4805]: I1216 13:04:58.010525 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:04:58 crc kubenswrapper[4805]: I1216 13:04:58.064668 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:04:58 crc kubenswrapper[4805]: I1216 13:04:58.266661 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-shdpq"] Dec 16 13:04:59 crc kubenswrapper[4805]: I1216 13:04:59.633230 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-shdpq" podUID="c151cf78-c568-40c0-8b87-35ae0b8e693f" containerName="registry-server" containerID="cri-o://f00146adc3cdfda5b5dc2b515956ab87f3858709656ce26d91195bb5a7f2ff9a" gracePeriod=2 Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.239086 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.377932 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c151cf78-c568-40c0-8b87-35ae0b8e693f-utilities\") pod \"c151cf78-c568-40c0-8b87-35ae0b8e693f\" (UID: \"c151cf78-c568-40c0-8b87-35ae0b8e693f\") " Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.378087 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntxx8\" (UniqueName: \"kubernetes.io/projected/c151cf78-c568-40c0-8b87-35ae0b8e693f-kube-api-access-ntxx8\") pod \"c151cf78-c568-40c0-8b87-35ae0b8e693f\" (UID: \"c151cf78-c568-40c0-8b87-35ae0b8e693f\") " Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.378218 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c151cf78-c568-40c0-8b87-35ae0b8e693f-catalog-content\") pod \"c151cf78-c568-40c0-8b87-35ae0b8e693f\" (UID: \"c151cf78-c568-40c0-8b87-35ae0b8e693f\") " Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.379188 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c151cf78-c568-40c0-8b87-35ae0b8e693f-utilities" (OuterVolumeSpecName: "utilities") pod "c151cf78-c568-40c0-8b87-35ae0b8e693f" (UID: "c151cf78-c568-40c0-8b87-35ae0b8e693f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.386370 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c151cf78-c568-40c0-8b87-35ae0b8e693f-kube-api-access-ntxx8" (OuterVolumeSpecName: "kube-api-access-ntxx8") pod "c151cf78-c568-40c0-8b87-35ae0b8e693f" (UID: "c151cf78-c568-40c0-8b87-35ae0b8e693f"). InnerVolumeSpecName "kube-api-access-ntxx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.481264 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c151cf78-c568-40c0-8b87-35ae0b8e693f-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.481301 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntxx8\" (UniqueName: \"kubernetes.io/projected/c151cf78-c568-40c0-8b87-35ae0b8e693f-kube-api-access-ntxx8\") on node \"crc\" DevicePath \"\"" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.508281 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c151cf78-c568-40c0-8b87-35ae0b8e693f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c151cf78-c568-40c0-8b87-35ae0b8e693f" (UID: "c151cf78-c568-40c0-8b87-35ae0b8e693f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.583225 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c151cf78-c568-40c0-8b87-35ae0b8e693f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.644155 4805 generic.go:334] "Generic (PLEG): container finished" podID="c151cf78-c568-40c0-8b87-35ae0b8e693f" containerID="f00146adc3cdfda5b5dc2b515956ab87f3858709656ce26d91195bb5a7f2ff9a" exitCode=0 Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.644198 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shdpq" event={"ID":"c151cf78-c568-40c0-8b87-35ae0b8e693f","Type":"ContainerDied","Data":"f00146adc3cdfda5b5dc2b515956ab87f3858709656ce26d91195bb5a7f2ff9a"} Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.644227 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shdpq" event={"ID":"c151cf78-c568-40c0-8b87-35ae0b8e693f","Type":"ContainerDied","Data":"3c7d3850f874030a08a036f605a3d0cdaa3a0cc4380b2f2e268140ff54f6a85a"} Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.644245 4805 scope.go:117] "RemoveContainer" containerID="f00146adc3cdfda5b5dc2b515956ab87f3858709656ce26d91195bb5a7f2ff9a" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.644373 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-shdpq" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.677251 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-shdpq"] Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.679936 4805 scope.go:117] "RemoveContainer" containerID="6c995d91b65cfc6585f79c5905761a17c4aa64cf5de439ea1b29378f79bc5c7e" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.688469 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-shdpq"] Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.708027 4805 scope.go:117] "RemoveContainer" containerID="c55febb630025c97f2af99dec0160e7aef829b4885159003034850983cb9f3bb" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.750339 4805 scope.go:117] "RemoveContainer" containerID="f00146adc3cdfda5b5dc2b515956ab87f3858709656ce26d91195bb5a7f2ff9a" Dec 16 13:05:00 crc kubenswrapper[4805]: E1216 13:05:00.750848 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f00146adc3cdfda5b5dc2b515956ab87f3858709656ce26d91195bb5a7f2ff9a\": container with ID starting with f00146adc3cdfda5b5dc2b515956ab87f3858709656ce26d91195bb5a7f2ff9a not found: ID does not exist" containerID="f00146adc3cdfda5b5dc2b515956ab87f3858709656ce26d91195bb5a7f2ff9a" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.750894 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f00146adc3cdfda5b5dc2b515956ab87f3858709656ce26d91195bb5a7f2ff9a"} err="failed to get container status \"f00146adc3cdfda5b5dc2b515956ab87f3858709656ce26d91195bb5a7f2ff9a\": rpc error: code = NotFound desc = could not find container \"f00146adc3cdfda5b5dc2b515956ab87f3858709656ce26d91195bb5a7f2ff9a\": container with ID starting with f00146adc3cdfda5b5dc2b515956ab87f3858709656ce26d91195bb5a7f2ff9a not found: ID does not exist" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.750922 4805 scope.go:117] "RemoveContainer" containerID="6c995d91b65cfc6585f79c5905761a17c4aa64cf5de439ea1b29378f79bc5c7e" Dec 16 13:05:00 crc kubenswrapper[4805]: E1216 13:05:00.751432 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c995d91b65cfc6585f79c5905761a17c4aa64cf5de439ea1b29378f79bc5c7e\": container with ID starting with 6c995d91b65cfc6585f79c5905761a17c4aa64cf5de439ea1b29378f79bc5c7e not found: ID does not exist" containerID="6c995d91b65cfc6585f79c5905761a17c4aa64cf5de439ea1b29378f79bc5c7e" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.751460 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c995d91b65cfc6585f79c5905761a17c4aa64cf5de439ea1b29378f79bc5c7e"} err="failed to get container status \"6c995d91b65cfc6585f79c5905761a17c4aa64cf5de439ea1b29378f79bc5c7e\": rpc error: code = NotFound desc = could not find container \"6c995d91b65cfc6585f79c5905761a17c4aa64cf5de439ea1b29378f79bc5c7e\": container with ID starting with 6c995d91b65cfc6585f79c5905761a17c4aa64cf5de439ea1b29378f79bc5c7e not found: ID does not exist" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.751482 4805 scope.go:117] "RemoveContainer" containerID="c55febb630025c97f2af99dec0160e7aef829b4885159003034850983cb9f3bb" Dec 16 13:05:00 crc kubenswrapper[4805]: E1216 13:05:00.751821 4805 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"c55febb630025c97f2af99dec0160e7aef829b4885159003034850983cb9f3bb\": container with ID starting with c55febb630025c97f2af99dec0160e7aef829b4885159003034850983cb9f3bb not found: ID does not exist" containerID="c55febb630025c97f2af99dec0160e7aef829b4885159003034850983cb9f3bb" Dec 16 13:05:00 crc kubenswrapper[4805]: I1216 13:05:00.751856 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c55febb630025c97f2af99dec0160e7aef829b4885159003034850983cb9f3bb"} err="failed to get container status \"c55febb630025c97f2af99dec0160e7aef829b4885159003034850983cb9f3bb\": rpc error: code = NotFound desc = could not find container \"c55febb630025c97f2af99dec0160e7aef829b4885159003034850983cb9f3bb\": container with ID starting with c55febb630025c97f2af99dec0160e7aef829b4885159003034850983cb9f3bb not found: ID does not exist" Dec 16 13:05:02 crc kubenswrapper[4805]: I1216 13:05:02.535402 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c151cf78-c568-40c0-8b87-35ae0b8e693f" path="/var/lib/kubelet/pods/c151cf78-c568-40c0-8b87-35ae0b8e693f/volumes" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.508286 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bbzgc"] Dec 16 13:06:23 crc kubenswrapper[4805]: E1216 13:06:23.509065 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c151cf78-c568-40c0-8b87-35ae0b8e693f" containerName="registry-server" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.509078 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c151cf78-c568-40c0-8b87-35ae0b8e693f" containerName="registry-server" Dec 16 13:06:23 crc kubenswrapper[4805]: E1216 13:06:23.509091 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c151cf78-c568-40c0-8b87-35ae0b8e693f" containerName="extract-content" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.509097 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c151cf78-c568-40c0-8b87-35ae0b8e693f" containerName="extract-content" Dec 16 13:06:23 crc kubenswrapper[4805]: E1216 13:06:23.509106 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c151cf78-c568-40c0-8b87-35ae0b8e693f" containerName="extract-utilities" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.509112 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c151cf78-c568-40c0-8b87-35ae0b8e693f" containerName="extract-utilities" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.509342 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c151cf78-c568-40c0-8b87-35ae0b8e693f" containerName="registry-server" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.510947 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.522530 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbzgc"] Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.638486 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7s97\" (UniqueName: \"kubernetes.io/projected/5b268c89-302a-4e50-a37a-14616c52c6a1-kube-api-access-x7s97\") pod \"certified-operators-bbzgc\" (UID: \"5b268c89-302a-4e50-a37a-14616c52c6a1\") " pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.638585 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b268c89-302a-4e50-a37a-14616c52c6a1-catalog-content\") pod \"certified-operators-bbzgc\" (UID: \"5b268c89-302a-4e50-a37a-14616c52c6a1\") " pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.639313 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b268c89-302a-4e50-a37a-14616c52c6a1-utilities\") pod \"certified-operators-bbzgc\" (UID: \"5b268c89-302a-4e50-a37a-14616c52c6a1\") " pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.740851 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b268c89-302a-4e50-a37a-14616c52c6a1-catalog-content\") pod \"certified-operators-bbzgc\" (UID: \"5b268c89-302a-4e50-a37a-14616c52c6a1\") " pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.740984 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b268c89-302a-4e50-a37a-14616c52c6a1-utilities\") pod \"certified-operators-bbzgc\" (UID: \"5b268c89-302a-4e50-a37a-14616c52c6a1\") " pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.741098 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7s97\" (UniqueName: \"kubernetes.io/projected/5b268c89-302a-4e50-a37a-14616c52c6a1-kube-api-access-x7s97\") pod \"certified-operators-bbzgc\" (UID: \"5b268c89-302a-4e50-a37a-14616c52c6a1\") " pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.741914 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b268c89-302a-4e50-a37a-14616c52c6a1-catalog-content\") pod \"certified-operators-bbzgc\" (UID: \"5b268c89-302a-4e50-a37a-14616c52c6a1\") " pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.742724 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b268c89-302a-4e50-a37a-14616c52c6a1-utilities\") pod \"certified-operators-bbzgc\" (UID: \"5b268c89-302a-4e50-a37a-14616c52c6a1\") " pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.783312 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x7s97\" (UniqueName: \"kubernetes.io/projected/5b268c89-302a-4e50-a37a-14616c52c6a1-kube-api-access-x7s97\") pod \"certified-operators-bbzgc\" (UID: \"5b268c89-302a-4e50-a37a-14616c52c6a1\") " pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:23 crc kubenswrapper[4805]: I1216 13:06:23.830754 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:24 crc kubenswrapper[4805]: I1216 13:06:24.457892 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbzgc"] Dec 16 13:06:25 crc kubenswrapper[4805]: I1216 13:06:25.481889 4805 generic.go:334] "Generic (PLEG): container finished" podID="5b268c89-302a-4e50-a37a-14616c52c6a1" containerID="ec5450c3ef2aa37a793e355f2047f2f161a6c5492b28da61bfc04bc1ee10fc89" exitCode=0 Dec 16 13:06:25 crc kubenswrapper[4805]: I1216 13:06:25.481953 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbzgc" event={"ID":"5b268c89-302a-4e50-a37a-14616c52c6a1","Type":"ContainerDied","Data":"ec5450c3ef2aa37a793e355f2047f2f161a6c5492b28da61bfc04bc1ee10fc89"} Dec 16 13:06:25 crc kubenswrapper[4805]: I1216 13:06:25.482101 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbzgc" event={"ID":"5b268c89-302a-4e50-a37a-14616c52c6a1","Type":"ContainerStarted","Data":"924071d3a68d7f4341d04a6b76c01000b8216931a794d6ba4406f1483520a7cb"} Dec 16 13:06:27 crc kubenswrapper[4805]: I1216 13:06:27.505278 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbzgc" event={"ID":"5b268c89-302a-4e50-a37a-14616c52c6a1","Type":"ContainerStarted","Data":"e4cca996cf8a4b782e72b12807e834b2ffb7261fcbe03917b55f5357f33e45f7"} Dec 16 13:06:28 crc kubenswrapper[4805]: I1216 13:06:28.516890 4805 generic.go:334] "Generic (PLEG): container finished" podID="5b268c89-302a-4e50-a37a-14616c52c6a1" containerID="e4cca996cf8a4b782e72b12807e834b2ffb7261fcbe03917b55f5357f33e45f7" exitCode=0 Dec 16 13:06:28 crc kubenswrapper[4805]: I1216 13:06:28.516978 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbzgc" event={"ID":"5b268c89-302a-4e50-a37a-14616c52c6a1","Type":"ContainerDied","Data":"e4cca996cf8a4b782e72b12807e834b2ffb7261fcbe03917b55f5357f33e45f7"} Dec 16 13:06:29 crc kubenswrapper[4805]: I1216 13:06:29.530186 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbzgc" event={"ID":"5b268c89-302a-4e50-a37a-14616c52c6a1","Type":"ContainerStarted","Data":"1fdb02172b28b9cfd39ef156909660e13d3759cfe200ebac15441e97a8e2a91b"} Dec 16 13:06:29 crc kubenswrapper[4805]: I1216 13:06:29.554919 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bbzgc" podStartSLOduration=2.898161509 podStartE2EDuration="6.554886939s" podCreationTimestamp="2025-12-16 13:06:23 +0000 UTC" firstStartedPulling="2025-12-16 13:06:25.483764291 +0000 UTC m=+4259.202022096" lastFinishedPulling="2025-12-16 13:06:29.140489721 +0000 UTC m=+4262.858747526" observedRunningTime="2025-12-16 13:06:29.554485958 +0000 UTC m=+4263.272743783" watchObservedRunningTime="2025-12-16 13:06:29.554886939 +0000 UTC m=+4263.273144754" Dec 16 13:06:33 crc kubenswrapper[4805]: I1216 13:06:33.831462 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:33 crc kubenswrapper[4805]: I1216 13:06:33.832165 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:33 crc kubenswrapper[4805]: I1216 13:06:33.884036 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:34 crc kubenswrapper[4805]: I1216 13:06:34.727778 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:34 crc kubenswrapper[4805]: I1216 13:06:34.789862 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbzgc"] Dec 16 13:06:36 crc kubenswrapper[4805]: I1216 13:06:36.684033 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bbzgc" podUID="5b268c89-302a-4e50-a37a-14616c52c6a1" containerName="registry-server" containerID="cri-o://1fdb02172b28b9cfd39ef156909660e13d3759cfe200ebac15441e97a8e2a91b" gracePeriod=2 Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.217521 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.247042 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b268c89-302a-4e50-a37a-14616c52c6a1-catalog-content\") pod \"5b268c89-302a-4e50-a37a-14616c52c6a1\" (UID: \"5b268c89-302a-4e50-a37a-14616c52c6a1\") " Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.247099 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7s97\" (UniqueName: \"kubernetes.io/projected/5b268c89-302a-4e50-a37a-14616c52c6a1-kube-api-access-x7s97\") pod \"5b268c89-302a-4e50-a37a-14616c52c6a1\" (UID: \"5b268c89-302a-4e50-a37a-14616c52c6a1\") " Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.247155 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b268c89-302a-4e50-a37a-14616c52c6a1-utilities\") pod \"5b268c89-302a-4e50-a37a-14616c52c6a1\" (UID: \"5b268c89-302a-4e50-a37a-14616c52c6a1\") " Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.247987 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b268c89-302a-4e50-a37a-14616c52c6a1-utilities" (OuterVolumeSpecName: "utilities") pod "5b268c89-302a-4e50-a37a-14616c52c6a1" (UID: "5b268c89-302a-4e50-a37a-14616c52c6a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.253513 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b268c89-302a-4e50-a37a-14616c52c6a1-kube-api-access-x7s97" (OuterVolumeSpecName: "kube-api-access-x7s97") pod "5b268c89-302a-4e50-a37a-14616c52c6a1" (UID: "5b268c89-302a-4e50-a37a-14616c52c6a1"). InnerVolumeSpecName "kube-api-access-x7s97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.335003 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b268c89-302a-4e50-a37a-14616c52c6a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b268c89-302a-4e50-a37a-14616c52c6a1" (UID: "5b268c89-302a-4e50-a37a-14616c52c6a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.350254 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b268c89-302a-4e50-a37a-14616c52c6a1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.350293 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7s97\" (UniqueName: \"kubernetes.io/projected/5b268c89-302a-4e50-a37a-14616c52c6a1-kube-api-access-x7s97\") on node \"crc\" DevicePath \"\"" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.350305 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b268c89-302a-4e50-a37a-14616c52c6a1-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.765474 4805 generic.go:334] "Generic (PLEG): container finished" podID="5b268c89-302a-4e50-a37a-14616c52c6a1" containerID="1fdb02172b28b9cfd39ef156909660e13d3759cfe200ebac15441e97a8e2a91b" exitCode=0 Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.765519 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbzgc" event={"ID":"5b268c89-302a-4e50-a37a-14616c52c6a1","Type":"ContainerDied","Data":"1fdb02172b28b9cfd39ef156909660e13d3759cfe200ebac15441e97a8e2a91b"} Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.765552 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbzgc" event={"ID":"5b268c89-302a-4e50-a37a-14616c52c6a1","Type":"ContainerDied","Data":"924071d3a68d7f4341d04a6b76c01000b8216931a794d6ba4406f1483520a7cb"} Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.765571 4805 scope.go:117] "RemoveContainer" containerID="1fdb02172b28b9cfd39ef156909660e13d3759cfe200ebac15441e97a8e2a91b" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.765622 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbzgc" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.795986 4805 scope.go:117] "RemoveContainer" containerID="e4cca996cf8a4b782e72b12807e834b2ffb7261fcbe03917b55f5357f33e45f7" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.825118 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbzgc"] Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.825308 4805 scope.go:117] "RemoveContainer" containerID="ec5450c3ef2aa37a793e355f2047f2f161a6c5492b28da61bfc04bc1ee10fc89" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.826809 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bbzgc"] Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.884955 4805 scope.go:117] "RemoveContainer" containerID="1fdb02172b28b9cfd39ef156909660e13d3759cfe200ebac15441e97a8e2a91b" Dec 16 13:06:37 crc kubenswrapper[4805]: E1216 13:06:37.888011 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fdb02172b28b9cfd39ef156909660e13d3759cfe200ebac15441e97a8e2a91b\": container with ID starting with 1fdb02172b28b9cfd39ef156909660e13d3759cfe200ebac15441e97a8e2a91b not found: ID does not exist" containerID="1fdb02172b28b9cfd39ef156909660e13d3759cfe200ebac15441e97a8e2a91b" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.888065 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fdb02172b28b9cfd39ef156909660e13d3759cfe200ebac15441e97a8e2a91b"} err="failed to get container status \"1fdb02172b28b9cfd39ef156909660e13d3759cfe200ebac15441e97a8e2a91b\": rpc error: code = NotFound desc = could not find container \"1fdb02172b28b9cfd39ef156909660e13d3759cfe200ebac15441e97a8e2a91b\": container with ID starting with 1fdb02172b28b9cfd39ef156909660e13d3759cfe200ebac15441e97a8e2a91b not found: ID does not exist" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.888117 4805 scope.go:117] "RemoveContainer" containerID="e4cca996cf8a4b782e72b12807e834b2ffb7261fcbe03917b55f5357f33e45f7" Dec 16 13:06:37 crc kubenswrapper[4805]: E1216 13:06:37.889002 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4cca996cf8a4b782e72b12807e834b2ffb7261fcbe03917b55f5357f33e45f7\": container with ID starting with e4cca996cf8a4b782e72b12807e834b2ffb7261fcbe03917b55f5357f33e45f7 not found: ID does not exist" containerID="e4cca996cf8a4b782e72b12807e834b2ffb7261fcbe03917b55f5357f33e45f7" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.889031 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4cca996cf8a4b782e72b12807e834b2ffb7261fcbe03917b55f5357f33e45f7"} err="failed to get container status \"e4cca996cf8a4b782e72b12807e834b2ffb7261fcbe03917b55f5357f33e45f7\": rpc error: code = NotFound desc = could not find container \"e4cca996cf8a4b782e72b12807e834b2ffb7261fcbe03917b55f5357f33e45f7\": container with ID starting with e4cca996cf8a4b782e72b12807e834b2ffb7261fcbe03917b55f5357f33e45f7 not found: ID does not exist" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.889059 4805 scope.go:117] "RemoveContainer" containerID="ec5450c3ef2aa37a793e355f2047f2f161a6c5492b28da61bfc04bc1ee10fc89" Dec 16 13:06:37 crc kubenswrapper[4805]: E1216 13:06:37.889758 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ec5450c3ef2aa37a793e355f2047f2f161a6c5492b28da61bfc04bc1ee10fc89\": container with ID starting with ec5450c3ef2aa37a793e355f2047f2f161a6c5492b28da61bfc04bc1ee10fc89 not found: ID does not exist" containerID="ec5450c3ef2aa37a793e355f2047f2f161a6c5492b28da61bfc04bc1ee10fc89" Dec 16 13:06:37 crc kubenswrapper[4805]: I1216 13:06:37.889784 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec5450c3ef2aa37a793e355f2047f2f161a6c5492b28da61bfc04bc1ee10fc89"} err="failed to get container status \"ec5450c3ef2aa37a793e355f2047f2f161a6c5492b28da61bfc04bc1ee10fc89\": rpc error: code = NotFound desc = could not find container \"ec5450c3ef2aa37a793e355f2047f2f161a6c5492b28da61bfc04bc1ee10fc89\": container with ID starting with ec5450c3ef2aa37a793e355f2047f2f161a6c5492b28da61bfc04bc1ee10fc89 not found: ID does not exist" Dec 16 13:06:38 crc kubenswrapper[4805]: I1216 13:06:38.553523 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b268c89-302a-4e50-a37a-14616c52c6a1" path="/var/lib/kubelet/pods/5b268c89-302a-4e50-a37a-14616c52c6a1/volumes" Dec 16 13:06:57 crc kubenswrapper[4805]: I1216 13:06:57.071549 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 13:06:57 crc kubenswrapper[4805]: I1216 13:06:57.072109 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 13:07:06 crc kubenswrapper[4805]: I1216 13:07:06.940553 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gk5l2"] Dec 16 13:07:06 crc kubenswrapper[4805]: E1216 13:07:06.941460 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b268c89-302a-4e50-a37a-14616c52c6a1" containerName="extract-content" Dec 16 13:07:06 crc kubenswrapper[4805]: I1216 13:07:06.941476 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b268c89-302a-4e50-a37a-14616c52c6a1" containerName="extract-content" Dec 16 13:07:06 crc kubenswrapper[4805]: E1216 13:07:06.941505 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b268c89-302a-4e50-a37a-14616c52c6a1" containerName="extract-utilities" Dec 16 13:07:06 crc kubenswrapper[4805]: I1216 13:07:06.941513 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b268c89-302a-4e50-a37a-14616c52c6a1" containerName="extract-utilities" Dec 16 13:07:06 crc kubenswrapper[4805]: E1216 13:07:06.941541 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b268c89-302a-4e50-a37a-14616c52c6a1" containerName="registry-server" Dec 16 13:07:06 crc kubenswrapper[4805]: I1216 13:07:06.941548 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b268c89-302a-4e50-a37a-14616c52c6a1" containerName="registry-server" Dec 16 13:07:06 crc kubenswrapper[4805]: I1216 13:07:06.941766 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b268c89-302a-4e50-a37a-14616c52c6a1" containerName="registry-server" Dec 16 13:07:06 crc kubenswrapper[4805]: I1216 
13:07:06.943286 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:06 crc kubenswrapper[4805]: I1216 13:07:06.961189 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pvs5\" (UniqueName: \"kubernetes.io/projected/87d2bb25-98e5-46ab-8946-849d51a5fa3d-kube-api-access-5pvs5\") pod \"community-operators-gk5l2\" (UID: \"87d2bb25-98e5-46ab-8946-849d51a5fa3d\") " pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:06 crc kubenswrapper[4805]: I1216 13:07:06.961283 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d2bb25-98e5-46ab-8946-849d51a5fa3d-utilities\") pod \"community-operators-gk5l2\" (UID: \"87d2bb25-98e5-46ab-8946-849d51a5fa3d\") " pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:06 crc kubenswrapper[4805]: I1216 13:07:06.961329 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d2bb25-98e5-46ab-8946-849d51a5fa3d-catalog-content\") pod \"community-operators-gk5l2\" (UID: \"87d2bb25-98e5-46ab-8946-849d51a5fa3d\") " pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:06 crc kubenswrapper[4805]: I1216 13:07:06.962290 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gk5l2"] Dec 16 13:07:07 crc kubenswrapper[4805]: I1216 13:07:07.063426 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d2bb25-98e5-46ab-8946-849d51a5fa3d-catalog-content\") pod \"community-operators-gk5l2\" (UID: \"87d2bb25-98e5-46ab-8946-849d51a5fa3d\") " pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:07 crc kubenswrapper[4805]: I1216 13:07:07.063666 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pvs5\" (UniqueName: \"kubernetes.io/projected/87d2bb25-98e5-46ab-8946-849d51a5fa3d-kube-api-access-5pvs5\") pod \"community-operators-gk5l2\" (UID: \"87d2bb25-98e5-46ab-8946-849d51a5fa3d\") " pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:07 crc kubenswrapper[4805]: I1216 13:07:07.063725 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d2bb25-98e5-46ab-8946-849d51a5fa3d-utilities\") pod \"community-operators-gk5l2\" (UID: \"87d2bb25-98e5-46ab-8946-849d51a5fa3d\") " pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:07 crc kubenswrapper[4805]: I1216 13:07:07.063914 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d2bb25-98e5-46ab-8946-849d51a5fa3d-catalog-content\") pod \"community-operators-gk5l2\" (UID: \"87d2bb25-98e5-46ab-8946-849d51a5fa3d\") " pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:07 crc kubenswrapper[4805]: I1216 13:07:07.064219 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d2bb25-98e5-46ab-8946-849d51a5fa3d-utilities\") pod \"community-operators-gk5l2\" (UID: \"87d2bb25-98e5-46ab-8946-849d51a5fa3d\") " pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:07 crc 
kubenswrapper[4805]: I1216 13:07:07.084864 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pvs5\" (UniqueName: \"kubernetes.io/projected/87d2bb25-98e5-46ab-8946-849d51a5fa3d-kube-api-access-5pvs5\") pod \"community-operators-gk5l2\" (UID: \"87d2bb25-98e5-46ab-8946-849d51a5fa3d\") " pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:07 crc kubenswrapper[4805]: I1216 13:07:07.271252 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:07 crc kubenswrapper[4805]: I1216 13:07:07.905904 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gk5l2"] Dec 16 13:07:07 crc kubenswrapper[4805]: W1216 13:07:07.914536 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87d2bb25_98e5_46ab_8946_849d51a5fa3d.slice/crio-eb1d78e99f75397b70dfd5bdb3b66d8394cfbec5f57023c72c3a51b7fbe1a9e1 WatchSource:0}: Error finding container eb1d78e99f75397b70dfd5bdb3b66d8394cfbec5f57023c72c3a51b7fbe1a9e1: Status 404 returned error can't find the container with id eb1d78e99f75397b70dfd5bdb3b66d8394cfbec5f57023c72c3a51b7fbe1a9e1 Dec 16 13:07:08 crc kubenswrapper[4805]: I1216 13:07:08.162348 4805 generic.go:334] "Generic (PLEG): container finished" podID="87d2bb25-98e5-46ab-8946-849d51a5fa3d" containerID="38832be0206caa346205b20e24a175cb6434133d3c314a7d7901d028ce152d8a" exitCode=0 Dec 16 13:07:08 crc kubenswrapper[4805]: I1216 13:07:08.162514 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gk5l2" event={"ID":"87d2bb25-98e5-46ab-8946-849d51a5fa3d","Type":"ContainerDied","Data":"38832be0206caa346205b20e24a175cb6434133d3c314a7d7901d028ce152d8a"} Dec 16 13:07:08 crc kubenswrapper[4805]: I1216 13:07:08.162601 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gk5l2" event={"ID":"87d2bb25-98e5-46ab-8946-849d51a5fa3d","Type":"ContainerStarted","Data":"eb1d78e99f75397b70dfd5bdb3b66d8394cfbec5f57023c72c3a51b7fbe1a9e1"} Dec 16 13:07:08 crc kubenswrapper[4805]: I1216 13:07:08.930513 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jxl62"] Dec 16 13:07:08 crc kubenswrapper[4805]: I1216 13:07:08.933067 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:08 crc kubenswrapper[4805]: I1216 13:07:08.947146 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxl62"] Dec 16 13:07:09 crc kubenswrapper[4805]: I1216 13:07:09.101130 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8tt2\" (UniqueName: \"kubernetes.io/projected/12042370-646c-4863-8950-c722f787b91a-kube-api-access-d8tt2\") pod \"redhat-marketplace-jxl62\" (UID: \"12042370-646c-4863-8950-c722f787b91a\") " pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:09 crc kubenswrapper[4805]: I1216 13:07:09.101523 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12042370-646c-4863-8950-c722f787b91a-catalog-content\") pod \"redhat-marketplace-jxl62\" (UID: \"12042370-646c-4863-8950-c722f787b91a\") " pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:09 crc kubenswrapper[4805]: I1216 13:07:09.101737 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12042370-646c-4863-8950-c722f787b91a-utilities\") pod \"redhat-marketplace-jxl62\" (UID: \"12042370-646c-4863-8950-c722f787b91a\") " pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:09 crc kubenswrapper[4805]: I1216 13:07:09.174454 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gk5l2" event={"ID":"87d2bb25-98e5-46ab-8946-849d51a5fa3d","Type":"ContainerStarted","Data":"079fb3d06958b630a069537718d1dd10e23fb8450841e1e12d1027fec023a4c6"} Dec 16 13:07:09 crc kubenswrapper[4805]: I1216 13:07:09.204466 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12042370-646c-4863-8950-c722f787b91a-utilities\") pod \"redhat-marketplace-jxl62\" (UID: \"12042370-646c-4863-8950-c722f787b91a\") " pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:09 crc kubenswrapper[4805]: I1216 13:07:09.204661 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8tt2\" (UniqueName: \"kubernetes.io/projected/12042370-646c-4863-8950-c722f787b91a-kube-api-access-d8tt2\") pod \"redhat-marketplace-jxl62\" (UID: \"12042370-646c-4863-8950-c722f787b91a\") " pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:09 crc kubenswrapper[4805]: I1216 13:07:09.204753 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12042370-646c-4863-8950-c722f787b91a-catalog-content\") pod \"redhat-marketplace-jxl62\" (UID: \"12042370-646c-4863-8950-c722f787b91a\") " pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:09 crc kubenswrapper[4805]: I1216 13:07:09.205108 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12042370-646c-4863-8950-c722f787b91a-utilities\") pod \"redhat-marketplace-jxl62\" (UID: \"12042370-646c-4863-8950-c722f787b91a\") " pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:09 crc kubenswrapper[4805]: I1216 13:07:09.205274 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/12042370-646c-4863-8950-c722f787b91a-catalog-content\") pod \"redhat-marketplace-jxl62\" (UID: \"12042370-646c-4863-8950-c722f787b91a\") " pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:09 crc kubenswrapper[4805]: I1216 13:07:09.270686 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8tt2\" (UniqueName: \"kubernetes.io/projected/12042370-646c-4863-8950-c722f787b91a-kube-api-access-d8tt2\") pod \"redhat-marketplace-jxl62\" (UID: \"12042370-646c-4863-8950-c722f787b91a\") " pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:09 crc kubenswrapper[4805]: I1216 13:07:09.287689 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:09 crc kubenswrapper[4805]: I1216 13:07:09.954008 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxl62"] Dec 16 13:07:09 crc kubenswrapper[4805]: W1216 13:07:09.962650 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12042370_646c_4863_8950_c722f787b91a.slice/crio-f92fca432cf2883e5aec24358b592ce6935860731f3ff19036bcd6d142d31e12 WatchSource:0}: Error finding container f92fca432cf2883e5aec24358b592ce6935860731f3ff19036bcd6d142d31e12: Status 404 returned error can't find the container with id f92fca432cf2883e5aec24358b592ce6935860731f3ff19036bcd6d142d31e12 Dec 16 13:07:10 crc kubenswrapper[4805]: I1216 13:07:10.193675 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxl62" event={"ID":"12042370-646c-4863-8950-c722f787b91a","Type":"ContainerStarted","Data":"f92fca432cf2883e5aec24358b592ce6935860731f3ff19036bcd6d142d31e12"} Dec 16 13:07:11 crc kubenswrapper[4805]: I1216 13:07:11.204380 4805 generic.go:334] "Generic (PLEG): container finished" podID="87d2bb25-98e5-46ab-8946-849d51a5fa3d" containerID="079fb3d06958b630a069537718d1dd10e23fb8450841e1e12d1027fec023a4c6" exitCode=0 Dec 16 13:07:11 crc kubenswrapper[4805]: I1216 13:07:11.204420 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gk5l2" event={"ID":"87d2bb25-98e5-46ab-8946-849d51a5fa3d","Type":"ContainerDied","Data":"079fb3d06958b630a069537718d1dd10e23fb8450841e1e12d1027fec023a4c6"} Dec 16 13:07:11 crc kubenswrapper[4805]: I1216 13:07:11.209999 4805 generic.go:334] "Generic (PLEG): container finished" podID="12042370-646c-4863-8950-c722f787b91a" containerID="d0f0599460c2fa1cbcccfdc87934ff4b7d902afc8e7e73f1c99812618929bff7" exitCode=0 Dec 16 13:07:11 crc kubenswrapper[4805]: I1216 13:07:11.210053 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxl62" event={"ID":"12042370-646c-4863-8950-c722f787b91a","Type":"ContainerDied","Data":"d0f0599460c2fa1cbcccfdc87934ff4b7d902afc8e7e73f1c99812618929bff7"} Dec 16 13:07:12 crc kubenswrapper[4805]: I1216 13:07:12.221660 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxl62" event={"ID":"12042370-646c-4863-8950-c722f787b91a","Type":"ContainerStarted","Data":"f3ebe24b4163365f16cc2ef6e979e7bfdf97fa6e179fe6f8608aca97d63fbf69"} Dec 16 13:07:12 crc kubenswrapper[4805]: I1216 13:07:12.225020 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gk5l2" 
event={"ID":"87d2bb25-98e5-46ab-8946-849d51a5fa3d","Type":"ContainerStarted","Data":"93436f4484a8776d2ccc294d4c0233e554e9f58510af3fd6f89acd86f7921c10"} Dec 16 13:07:12 crc kubenswrapper[4805]: I1216 13:07:12.279267 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gk5l2" podStartSLOduration=2.826108454 podStartE2EDuration="6.279246079s" podCreationTimestamp="2025-12-16 13:07:06 +0000 UTC" firstStartedPulling="2025-12-16 13:07:08.164444377 +0000 UTC m=+4301.882702182" lastFinishedPulling="2025-12-16 13:07:11.617582002 +0000 UTC m=+4305.335839807" observedRunningTime="2025-12-16 13:07:12.277106928 +0000 UTC m=+4305.995364753" watchObservedRunningTime="2025-12-16 13:07:12.279246079 +0000 UTC m=+4305.997503894" Dec 16 13:07:13 crc kubenswrapper[4805]: I1216 13:07:13.237009 4805 generic.go:334] "Generic (PLEG): container finished" podID="12042370-646c-4863-8950-c722f787b91a" containerID="f3ebe24b4163365f16cc2ef6e979e7bfdf97fa6e179fe6f8608aca97d63fbf69" exitCode=0 Dec 16 13:07:13 crc kubenswrapper[4805]: I1216 13:07:13.237080 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxl62" event={"ID":"12042370-646c-4863-8950-c722f787b91a","Type":"ContainerDied","Data":"f3ebe24b4163365f16cc2ef6e979e7bfdf97fa6e179fe6f8608aca97d63fbf69"} Dec 16 13:07:15 crc kubenswrapper[4805]: I1216 13:07:15.265023 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxl62" event={"ID":"12042370-646c-4863-8950-c722f787b91a","Type":"ContainerStarted","Data":"84bc7c408b875cbe18a9bd6b193708b030c21ccf8243e062b6b380f4c2571edc"} Dec 16 13:07:15 crc kubenswrapper[4805]: I1216 13:07:15.294328 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jxl62" podStartSLOduration=4.571187969 podStartE2EDuration="7.294303414s" podCreationTimestamp="2025-12-16 13:07:08 +0000 UTC" firstStartedPulling="2025-12-16 13:07:11.211796755 +0000 UTC m=+4304.930054560" lastFinishedPulling="2025-12-16 13:07:13.9349122 +0000 UTC m=+4307.653170005" observedRunningTime="2025-12-16 13:07:15.28471373 +0000 UTC m=+4309.002971535" watchObservedRunningTime="2025-12-16 13:07:15.294303414 +0000 UTC m=+4309.012561229" Dec 16 13:07:17 crc kubenswrapper[4805]: I1216 13:07:17.273007 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:17 crc kubenswrapper[4805]: I1216 13:07:17.273651 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:17 crc kubenswrapper[4805]: I1216 13:07:17.336665 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:17 crc kubenswrapper[4805]: I1216 13:07:17.382449 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:18 crc kubenswrapper[4805]: I1216 13:07:18.122854 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gk5l2"] Dec 16 13:07:19 crc kubenswrapper[4805]: I1216 13:07:19.288437 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:19 crc kubenswrapper[4805]: I1216 13:07:19.288499 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:19 crc kubenswrapper[4805]: I1216 13:07:19.321210 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gk5l2" podUID="87d2bb25-98e5-46ab-8946-849d51a5fa3d" containerName="registry-server" containerID="cri-o://93436f4484a8776d2ccc294d4c0233e554e9f58510af3fd6f89acd86f7921c10" gracePeriod=2 Dec 16 13:07:19 crc kubenswrapper[4805]: I1216 13:07:19.345925 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:19 crc kubenswrapper[4805]: I1216 13:07:19.397175 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:19 crc kubenswrapper[4805]: I1216 13:07:19.828993 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:19 crc kubenswrapper[4805]: I1216 13:07:19.932118 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d2bb25-98e5-46ab-8946-849d51a5fa3d-utilities\") pod \"87d2bb25-98e5-46ab-8946-849d51a5fa3d\" (UID: \"87d2bb25-98e5-46ab-8946-849d51a5fa3d\") " Dec 16 13:07:19 crc kubenswrapper[4805]: I1216 13:07:19.932217 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d2bb25-98e5-46ab-8946-849d51a5fa3d-catalog-content\") pod \"87d2bb25-98e5-46ab-8946-849d51a5fa3d\" (UID: \"87d2bb25-98e5-46ab-8946-849d51a5fa3d\") " Dec 16 13:07:19 crc kubenswrapper[4805]: I1216 13:07:19.932276 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pvs5\" (UniqueName: \"kubernetes.io/projected/87d2bb25-98e5-46ab-8946-849d51a5fa3d-kube-api-access-5pvs5\") pod \"87d2bb25-98e5-46ab-8946-849d51a5fa3d\" (UID: \"87d2bb25-98e5-46ab-8946-849d51a5fa3d\") " Dec 16 13:07:19 crc kubenswrapper[4805]: I1216 13:07:19.934043 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87d2bb25-98e5-46ab-8946-849d51a5fa3d-utilities" (OuterVolumeSpecName: "utilities") pod "87d2bb25-98e5-46ab-8946-849d51a5fa3d" (UID: "87d2bb25-98e5-46ab-8946-849d51a5fa3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:07:19 crc kubenswrapper[4805]: I1216 13:07:19.945875 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d2bb25-98e5-46ab-8946-849d51a5fa3d-kube-api-access-5pvs5" (OuterVolumeSpecName: "kube-api-access-5pvs5") pod "87d2bb25-98e5-46ab-8946-849d51a5fa3d" (UID: "87d2bb25-98e5-46ab-8946-849d51a5fa3d"). InnerVolumeSpecName "kube-api-access-5pvs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:07:19 crc kubenswrapper[4805]: I1216 13:07:19.994697 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87d2bb25-98e5-46ab-8946-849d51a5fa3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87d2bb25-98e5-46ab-8946-849d51a5fa3d" (UID: "87d2bb25-98e5-46ab-8946-849d51a5fa3d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.034494 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d2bb25-98e5-46ab-8946-849d51a5fa3d-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.034525 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d2bb25-98e5-46ab-8946-849d51a5fa3d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.034536 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pvs5\" (UniqueName: \"kubernetes.io/projected/87d2bb25-98e5-46ab-8946-849d51a5fa3d-kube-api-access-5pvs5\") on node \"crc\" DevicePath \"\"" Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.333281 4805 generic.go:334] "Generic (PLEG): container finished" podID="87d2bb25-98e5-46ab-8946-849d51a5fa3d" containerID="93436f4484a8776d2ccc294d4c0233e554e9f58510af3fd6f89acd86f7921c10" exitCode=0 Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.333375 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gk5l2" Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.333379 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gk5l2" event={"ID":"87d2bb25-98e5-46ab-8946-849d51a5fa3d","Type":"ContainerDied","Data":"93436f4484a8776d2ccc294d4c0233e554e9f58510af3fd6f89acd86f7921c10"} Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.333448 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gk5l2" event={"ID":"87d2bb25-98e5-46ab-8946-849d51a5fa3d","Type":"ContainerDied","Data":"eb1d78e99f75397b70dfd5bdb3b66d8394cfbec5f57023c72c3a51b7fbe1a9e1"} Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.333474 4805 scope.go:117] "RemoveContainer" containerID="93436f4484a8776d2ccc294d4c0233e554e9f58510af3fd6f89acd86f7921c10" Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.386391 4805 scope.go:117] "RemoveContainer" containerID="079fb3d06958b630a069537718d1dd10e23fb8450841e1e12d1027fec023a4c6" Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.388356 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gk5l2"] Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.412686 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gk5l2"] Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.425689 4805 scope.go:117] "RemoveContainer" containerID="38832be0206caa346205b20e24a175cb6434133d3c314a7d7901d028ce152d8a" Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.462710 4805 scope.go:117] "RemoveContainer" containerID="93436f4484a8776d2ccc294d4c0233e554e9f58510af3fd6f89acd86f7921c10" Dec 16 13:07:20 crc kubenswrapper[4805]: E1216 13:07:20.465070 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93436f4484a8776d2ccc294d4c0233e554e9f58510af3fd6f89acd86f7921c10\": container with ID starting with 93436f4484a8776d2ccc294d4c0233e554e9f58510af3fd6f89acd86f7921c10 not found: ID does not exist" containerID="93436f4484a8776d2ccc294d4c0233e554e9f58510af3fd6f89acd86f7921c10" Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.465152 
4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93436f4484a8776d2ccc294d4c0233e554e9f58510af3fd6f89acd86f7921c10"} err="failed to get container status \"93436f4484a8776d2ccc294d4c0233e554e9f58510af3fd6f89acd86f7921c10\": rpc error: code = NotFound desc = could not find container \"93436f4484a8776d2ccc294d4c0233e554e9f58510af3fd6f89acd86f7921c10\": container with ID starting with 93436f4484a8776d2ccc294d4c0233e554e9f58510af3fd6f89acd86f7921c10 not found: ID does not exist" Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.465186 4805 scope.go:117] "RemoveContainer" containerID="079fb3d06958b630a069537718d1dd10e23fb8450841e1e12d1027fec023a4c6" Dec 16 13:07:20 crc kubenswrapper[4805]: E1216 13:07:20.465982 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079fb3d06958b630a069537718d1dd10e23fb8450841e1e12d1027fec023a4c6\": container with ID starting with 079fb3d06958b630a069537718d1dd10e23fb8450841e1e12d1027fec023a4c6 not found: ID does not exist" containerID="079fb3d06958b630a069537718d1dd10e23fb8450841e1e12d1027fec023a4c6" Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.466023 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079fb3d06958b630a069537718d1dd10e23fb8450841e1e12d1027fec023a4c6"} err="failed to get container status \"079fb3d06958b630a069537718d1dd10e23fb8450841e1e12d1027fec023a4c6\": rpc error: code = NotFound desc = could not find container \"079fb3d06958b630a069537718d1dd10e23fb8450841e1e12d1027fec023a4c6\": container with ID starting with 079fb3d06958b630a069537718d1dd10e23fb8450841e1e12d1027fec023a4c6 not found: ID does not exist" Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.466051 4805 scope.go:117] "RemoveContainer" containerID="38832be0206caa346205b20e24a175cb6434133d3c314a7d7901d028ce152d8a" Dec 16 13:07:20 crc kubenswrapper[4805]: E1216 13:07:20.466697 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38832be0206caa346205b20e24a175cb6434133d3c314a7d7901d028ce152d8a\": container with ID starting with 38832be0206caa346205b20e24a175cb6434133d3c314a7d7901d028ce152d8a not found: ID does not exist" containerID="38832be0206caa346205b20e24a175cb6434133d3c314a7d7901d028ce152d8a" Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.466736 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38832be0206caa346205b20e24a175cb6434133d3c314a7d7901d028ce152d8a"} err="failed to get container status \"38832be0206caa346205b20e24a175cb6434133d3c314a7d7901d028ce152d8a\": rpc error: code = NotFound desc = could not find container \"38832be0206caa346205b20e24a175cb6434133d3c314a7d7901d028ce152d8a\": container with ID starting with 38832be0206caa346205b20e24a175cb6434133d3c314a7d7901d028ce152d8a not found: ID does not exist" Dec 16 13:07:20 crc kubenswrapper[4805]: I1216 13:07:20.538545 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87d2bb25-98e5-46ab-8946-849d51a5fa3d" path="/var/lib/kubelet/pods/87d2bb25-98e5-46ab-8946-849d51a5fa3d/volumes" Dec 16 13:07:24 crc kubenswrapper[4805]: I1216 13:07:24.121086 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxl62"] Dec 16 13:07:24 crc kubenswrapper[4805]: I1216 13:07:24.122879 4805 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-jxl62" podUID="12042370-646c-4863-8950-c722f787b91a" containerName="registry-server" containerID="cri-o://84bc7c408b875cbe18a9bd6b193708b030c21ccf8243e062b6b380f4c2571edc" gracePeriod=2 Dec 16 13:07:24 crc kubenswrapper[4805]: I1216 13:07:24.380570 4805 generic.go:334] "Generic (PLEG): container finished" podID="12042370-646c-4863-8950-c722f787b91a" containerID="84bc7c408b875cbe18a9bd6b193708b030c21ccf8243e062b6b380f4c2571edc" exitCode=0 Dec 16 13:07:24 crc kubenswrapper[4805]: I1216 13:07:24.380618 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxl62" event={"ID":"12042370-646c-4863-8950-c722f787b91a","Type":"ContainerDied","Data":"84bc7c408b875cbe18a9bd6b193708b030c21ccf8243e062b6b380f4c2571edc"} Dec 16 13:07:24 crc kubenswrapper[4805]: I1216 13:07:24.760207 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:24 crc kubenswrapper[4805]: I1216 13:07:24.844094 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12042370-646c-4863-8950-c722f787b91a-catalog-content\") pod \"12042370-646c-4863-8950-c722f787b91a\" (UID: \"12042370-646c-4863-8950-c722f787b91a\") " Dec 16 13:07:24 crc kubenswrapper[4805]: I1216 13:07:24.844265 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12042370-646c-4863-8950-c722f787b91a-utilities\") pod \"12042370-646c-4863-8950-c722f787b91a\" (UID: \"12042370-646c-4863-8950-c722f787b91a\") " Dec 16 13:07:24 crc kubenswrapper[4805]: I1216 13:07:24.844318 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8tt2\" (UniqueName: \"kubernetes.io/projected/12042370-646c-4863-8950-c722f787b91a-kube-api-access-d8tt2\") pod \"12042370-646c-4863-8950-c722f787b91a\" (UID: \"12042370-646c-4863-8950-c722f787b91a\") " Dec 16 13:07:24 crc kubenswrapper[4805]: I1216 13:07:24.845923 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12042370-646c-4863-8950-c722f787b91a-utilities" (OuterVolumeSpecName: "utilities") pod "12042370-646c-4863-8950-c722f787b91a" (UID: "12042370-646c-4863-8950-c722f787b91a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:07:24 crc kubenswrapper[4805]: I1216 13:07:24.855396 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12042370-646c-4863-8950-c722f787b91a-kube-api-access-d8tt2" (OuterVolumeSpecName: "kube-api-access-d8tt2") pod "12042370-646c-4863-8950-c722f787b91a" (UID: "12042370-646c-4863-8950-c722f787b91a"). InnerVolumeSpecName "kube-api-access-d8tt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:07:24 crc kubenswrapper[4805]: I1216 13:07:24.874732 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12042370-646c-4863-8950-c722f787b91a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12042370-646c-4863-8950-c722f787b91a" (UID: "12042370-646c-4863-8950-c722f787b91a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:07:24 crc kubenswrapper[4805]: I1216 13:07:24.946274 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12042370-646c-4863-8950-c722f787b91a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 13:07:24 crc kubenswrapper[4805]: I1216 13:07:24.946307 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12042370-646c-4863-8950-c722f787b91a-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 13:07:24 crc kubenswrapper[4805]: I1216 13:07:24.946317 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8tt2\" (UniqueName: \"kubernetes.io/projected/12042370-646c-4863-8950-c722f787b91a-kube-api-access-d8tt2\") on node \"crc\" DevicePath \"\"" Dec 16 13:07:25 crc kubenswrapper[4805]: I1216 13:07:25.390651 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxl62" event={"ID":"12042370-646c-4863-8950-c722f787b91a","Type":"ContainerDied","Data":"f92fca432cf2883e5aec24358b592ce6935860731f3ff19036bcd6d142d31e12"} Dec 16 13:07:25 crc kubenswrapper[4805]: I1216 13:07:25.390721 4805 scope.go:117] "RemoveContainer" containerID="84bc7c408b875cbe18a9bd6b193708b030c21ccf8243e062b6b380f4c2571edc" Dec 16 13:07:25 crc kubenswrapper[4805]: I1216 13:07:25.390744 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxl62" Dec 16 13:07:25 crc kubenswrapper[4805]: I1216 13:07:25.413899 4805 scope.go:117] "RemoveContainer" containerID="f3ebe24b4163365f16cc2ef6e979e7bfdf97fa6e179fe6f8608aca97d63fbf69" Dec 16 13:07:25 crc kubenswrapper[4805]: I1216 13:07:25.435591 4805 scope.go:117] "RemoveContainer" containerID="d0f0599460c2fa1cbcccfdc87934ff4b7d902afc8e7e73f1c99812618929bff7" Dec 16 13:07:25 crc kubenswrapper[4805]: I1216 13:07:25.460705 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxl62"] Dec 16 13:07:25 crc kubenswrapper[4805]: I1216 13:07:25.472201 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxl62"] Dec 16 13:07:26 crc kubenswrapper[4805]: I1216 13:07:26.535236 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12042370-646c-4863-8950-c722f787b91a" path="/var/lib/kubelet/pods/12042370-646c-4863-8950-c722f787b91a/volumes" Dec 16 13:07:27 crc kubenswrapper[4805]: I1216 13:07:27.071512 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 13:07:27 crc kubenswrapper[4805]: I1216 13:07:27.071581 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 13:07:57 crc kubenswrapper[4805]: I1216 13:07:57.071756 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 13:07:57 crc kubenswrapper[4805]: I1216 13:07:57.072413 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 13:07:57 crc kubenswrapper[4805]: I1216 13:07:57.072465 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 13:07:57 crc kubenswrapper[4805]: I1216 13:07:57.073658 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 13:07:57 crc kubenswrapper[4805]: I1216 13:07:57.073761 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" gracePeriod=600 Dec 16 13:07:57 crc kubenswrapper[4805]: E1216 13:07:57.224864 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:07:57 crc kubenswrapper[4805]: I1216 13:07:57.698808 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" exitCode=0 Dec 16 13:07:57 crc kubenswrapper[4805]: I1216 13:07:57.698866 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4"} Dec 16 13:07:57 crc kubenswrapper[4805]: I1216 13:07:57.698910 4805 scope.go:117] "RemoveContainer" containerID="2bc9f7466a5fc1734dc99434d0dca19bdc38293e1a7c9ba5046bbe3d52dad8c8" Dec 16 13:07:57 crc kubenswrapper[4805]: I1216 13:07:57.701514 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:07:57 crc kubenswrapper[4805]: E1216 13:07:57.702266 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:08:12 crc kubenswrapper[4805]: I1216 13:08:12.522670 4805 scope.go:117] "RemoveContainer" 
containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:08:12 crc kubenswrapper[4805]: E1216 13:08:12.523593 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:08:25 crc kubenswrapper[4805]: I1216 13:08:25.523882 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:08:25 crc kubenswrapper[4805]: E1216 13:08:25.525017 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:08:39 crc kubenswrapper[4805]: I1216 13:08:39.523584 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:08:39 crc kubenswrapper[4805]: E1216 13:08:39.524291 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:08:54 crc kubenswrapper[4805]: I1216 13:08:54.522618 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:08:54 crc kubenswrapper[4805]: E1216 13:08:54.523321 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:09:05 crc kubenswrapper[4805]: I1216 13:09:05.522650 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:09:05 crc kubenswrapper[4805]: E1216 13:09:05.523505 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:09:18 crc kubenswrapper[4805]: I1216 13:09:18.523113 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:09:18 crc kubenswrapper[4805]: E1216 13:09:18.524876 4805 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:09:31 crc kubenswrapper[4805]: I1216 13:09:31.522958 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:09:31 crc kubenswrapper[4805]: E1216 13:09:31.524462 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:09:46 crc kubenswrapper[4805]: I1216 13:09:46.530631 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:09:46 crc kubenswrapper[4805]: E1216 13:09:46.531417 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:09:49 crc kubenswrapper[4805]: I1216 13:09:49.726441 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-7c9ff8845d-swkll" podUID="bbd2ad8a-7239-4e25-bfbd-a009e826a337" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 16 13:09:59 crc kubenswrapper[4805]: I1216 13:09:59.522796 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:09:59 crc kubenswrapper[4805]: E1216 13:09:59.523979 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:10:11 crc kubenswrapper[4805]: I1216 13:10:11.525183 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:10:11 crc kubenswrapper[4805]: E1216 13:10:11.526197 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:10:26 crc kubenswrapper[4805]: I1216 13:10:26.528994 4805 
scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:10:26 crc kubenswrapper[4805]: E1216 13:10:26.530739 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:10:41 crc kubenswrapper[4805]: I1216 13:10:41.523440 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:10:41 crc kubenswrapper[4805]: E1216 13:10:41.524394 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:10:56 crc kubenswrapper[4805]: I1216 13:10:56.536298 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:10:56 crc kubenswrapper[4805]: E1216 13:10:56.538446 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:11:09 crc kubenswrapper[4805]: I1216 13:11:09.522843 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:11:09 crc kubenswrapper[4805]: E1216 13:11:09.523646 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:11:21 crc kubenswrapper[4805]: I1216 13:11:21.522875 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:11:21 crc kubenswrapper[4805]: E1216 13:11:21.523802 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:11:35 crc kubenswrapper[4805]: I1216 13:11:35.522626 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:11:35 crc kubenswrapper[4805]: E1216 13:11:35.523407 4805 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:11:47 crc kubenswrapper[4805]: I1216 13:11:47.522234 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:11:47 crc kubenswrapper[4805]: E1216 13:11:47.522954 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:12:02 crc kubenswrapper[4805]: I1216 13:12:02.527346 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:12:02 crc kubenswrapper[4805]: E1216 13:12:02.527923 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:12:15 crc kubenswrapper[4805]: I1216 13:12:15.522318 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:12:15 crc kubenswrapper[4805]: E1216 13:12:15.523023 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:12:26 crc kubenswrapper[4805]: I1216 13:12:26.554452 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:12:26 crc kubenswrapper[4805]: E1216 13:12:26.558951 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:12:41 crc kubenswrapper[4805]: I1216 13:12:41.523156 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:12:41 crc kubenswrapper[4805]: E1216 13:12:41.524073 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:12:53 crc kubenswrapper[4805]: I1216 13:12:53.523399 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:12:53 crc kubenswrapper[4805]: E1216 13:12:53.524239 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:13:05 crc kubenswrapper[4805]: I1216 13:13:05.523505 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:13:06 crc kubenswrapper[4805]: I1216 13:13:06.041578 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"f4fff8ab81a79c6040aa17493152ea9972a4d9da7b64f4dc0cd2a4c74183132a"} Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.160339 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw"] Dec 16 13:15:00 crc kubenswrapper[4805]: E1216 13:15:00.161533 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12042370-646c-4863-8950-c722f787b91a" containerName="extract-content" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.161557 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="12042370-646c-4863-8950-c722f787b91a" containerName="extract-content" Dec 16 13:15:00 crc kubenswrapper[4805]: E1216 13:15:00.161574 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d2bb25-98e5-46ab-8946-849d51a5fa3d" containerName="extract-utilities" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.161582 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d2bb25-98e5-46ab-8946-849d51a5fa3d" containerName="extract-utilities" Dec 16 13:15:00 crc kubenswrapper[4805]: E1216 13:15:00.161621 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d2bb25-98e5-46ab-8946-849d51a5fa3d" containerName="registry-server" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.161632 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d2bb25-98e5-46ab-8946-849d51a5fa3d" containerName="registry-server" Dec 16 13:15:00 crc kubenswrapper[4805]: E1216 13:15:00.161655 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d2bb25-98e5-46ab-8946-849d51a5fa3d" containerName="extract-content" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.161662 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d2bb25-98e5-46ab-8946-849d51a5fa3d" containerName="extract-content" Dec 16 13:15:00 crc kubenswrapper[4805]: E1216 13:15:00.161680 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12042370-646c-4863-8950-c722f787b91a" containerName="extract-utilities" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.161688 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="12042370-646c-4863-8950-c722f787b91a" containerName="extract-utilities" Dec 16 13:15:00 crc kubenswrapper[4805]: E1216 13:15:00.161698 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12042370-646c-4863-8950-c722f787b91a" containerName="registry-server" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.161705 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="12042370-646c-4863-8950-c722f787b91a" containerName="registry-server" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.161996 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="12042370-646c-4863-8950-c722f787b91a" containerName="registry-server" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.162017 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d2bb25-98e5-46ab-8946-849d51a5fa3d" containerName="registry-server" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.163539 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.166786 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.173033 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.186815 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw"] Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.309220 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/359f066f-14e5-4edf-949f-2266166c87aa-secret-volume\") pod \"collect-profiles-29431515-b8xqw\" (UID: \"359f066f-14e5-4edf-949f-2266166c87aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.309965 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/359f066f-14e5-4edf-949f-2266166c87aa-config-volume\") pod \"collect-profiles-29431515-b8xqw\" (UID: \"359f066f-14e5-4edf-949f-2266166c87aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.310055 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzkvd\" (UniqueName: \"kubernetes.io/projected/359f066f-14e5-4edf-949f-2266166c87aa-kube-api-access-fzkvd\") pod \"collect-profiles-29431515-b8xqw\" (UID: \"359f066f-14e5-4edf-949f-2266166c87aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.412476 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/359f066f-14e5-4edf-949f-2266166c87aa-config-volume\") pod \"collect-profiles-29431515-b8xqw\" (UID: \"359f066f-14e5-4edf-949f-2266166c87aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.412523 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fzkvd\" (UniqueName: \"kubernetes.io/projected/359f066f-14e5-4edf-949f-2266166c87aa-kube-api-access-fzkvd\") pod \"collect-profiles-29431515-b8xqw\" (UID: \"359f066f-14e5-4edf-949f-2266166c87aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.412631 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/359f066f-14e5-4edf-949f-2266166c87aa-secret-volume\") pod \"collect-profiles-29431515-b8xqw\" (UID: \"359f066f-14e5-4edf-949f-2266166c87aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.413776 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/359f066f-14e5-4edf-949f-2266166c87aa-config-volume\") pod \"collect-profiles-29431515-b8xqw\" (UID: \"359f066f-14e5-4edf-949f-2266166c87aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.425819 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/359f066f-14e5-4edf-949f-2266166c87aa-secret-volume\") pod \"collect-profiles-29431515-b8xqw\" (UID: \"359f066f-14e5-4edf-949f-2266166c87aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.430719 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzkvd\" (UniqueName: \"kubernetes.io/projected/359f066f-14e5-4edf-949f-2266166c87aa-kube-api-access-fzkvd\") pod \"collect-profiles-29431515-b8xqw\" (UID: \"359f066f-14e5-4edf-949f-2266166c87aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" Dec 16 13:15:00 crc kubenswrapper[4805]: I1216 13:15:00.486102 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" Dec 16 13:15:01 crc kubenswrapper[4805]: I1216 13:15:01.036324 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw"] Dec 16 13:15:01 crc kubenswrapper[4805]: I1216 13:15:01.436055 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" event={"ID":"359f066f-14e5-4edf-949f-2266166c87aa","Type":"ContainerStarted","Data":"726bdb3cfaf8d77f2e1e18a074c72b59dbd32182d4932662e7e5f2e1302fc3f4"} Dec 16 13:15:01 crc kubenswrapper[4805]: I1216 13:15:01.436382 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" event={"ID":"359f066f-14e5-4edf-949f-2266166c87aa","Type":"ContainerStarted","Data":"710f4a03a5c23ea6dade0dff26904aeacf68af8af7ea59077450256f477be20b"} Dec 16 13:15:01 crc kubenswrapper[4805]: I1216 13:15:01.460769 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" podStartSLOduration=1.46073273 podStartE2EDuration="1.46073273s" podCreationTimestamp="2025-12-16 13:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:15:01.451221208 +0000 UTC m=+4775.169479033" watchObservedRunningTime="2025-12-16 13:15:01.46073273 +0000 UTC m=+4775.178990555" Dec 16 13:15:02 crc kubenswrapper[4805]: I1216 13:15:02.447839 4805 generic.go:334] "Generic (PLEG): container finished" podID="359f066f-14e5-4edf-949f-2266166c87aa" containerID="726bdb3cfaf8d77f2e1e18a074c72b59dbd32182d4932662e7e5f2e1302fc3f4" exitCode=0 Dec 16 13:15:02 crc kubenswrapper[4805]: I1216 13:15:02.447971 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" event={"ID":"359f066f-14e5-4edf-949f-2266166c87aa","Type":"ContainerDied","Data":"726bdb3cfaf8d77f2e1e18a074c72b59dbd32182d4932662e7e5f2e1302fc3f4"} Dec 16 13:15:04 crc kubenswrapper[4805]: I1216 13:15:04.470825 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" event={"ID":"359f066f-14e5-4edf-949f-2266166c87aa","Type":"ContainerDied","Data":"710f4a03a5c23ea6dade0dff26904aeacf68af8af7ea59077450256f477be20b"} Dec 16 13:15:04 crc kubenswrapper[4805]: I1216 13:15:04.471317 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="710f4a03a5c23ea6dade0dff26904aeacf68af8af7ea59077450256f477be20b" Dec 16 13:15:04 crc kubenswrapper[4805]: I1216 13:15:04.538030 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" Dec 16 13:15:04 crc kubenswrapper[4805]: I1216 13:15:04.656454 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/359f066f-14e5-4edf-949f-2266166c87aa-config-volume\") pod \"359f066f-14e5-4edf-949f-2266166c87aa\" (UID: \"359f066f-14e5-4edf-949f-2266166c87aa\") " Dec 16 13:15:04 crc kubenswrapper[4805]: I1216 13:15:04.656561 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzkvd\" (UniqueName: \"kubernetes.io/projected/359f066f-14e5-4edf-949f-2266166c87aa-kube-api-access-fzkvd\") pod \"359f066f-14e5-4edf-949f-2266166c87aa\" (UID: \"359f066f-14e5-4edf-949f-2266166c87aa\") " Dec 16 13:15:04 crc kubenswrapper[4805]: I1216 13:15:04.656626 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/359f066f-14e5-4edf-949f-2266166c87aa-secret-volume\") pod \"359f066f-14e5-4edf-949f-2266166c87aa\" (UID: \"359f066f-14e5-4edf-949f-2266166c87aa\") " Dec 16 13:15:04 crc kubenswrapper[4805]: I1216 13:15:04.657353 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/359f066f-14e5-4edf-949f-2266166c87aa-config-volume" (OuterVolumeSpecName: "config-volume") pod "359f066f-14e5-4edf-949f-2266166c87aa" (UID: "359f066f-14e5-4edf-949f-2266166c87aa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 13:15:04 crc kubenswrapper[4805]: I1216 13:15:04.657970 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/359f066f-14e5-4edf-949f-2266166c87aa-config-volume\") on node \"crc\" DevicePath \"\"" Dec 16 13:15:04 crc kubenswrapper[4805]: I1216 13:15:04.664540 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359f066f-14e5-4edf-949f-2266166c87aa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "359f066f-14e5-4edf-949f-2266166c87aa" (UID: "359f066f-14e5-4edf-949f-2266166c87aa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 13:15:04 crc kubenswrapper[4805]: I1216 13:15:04.665177 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359f066f-14e5-4edf-949f-2266166c87aa-kube-api-access-fzkvd" (OuterVolumeSpecName: "kube-api-access-fzkvd") pod "359f066f-14e5-4edf-949f-2266166c87aa" (UID: "359f066f-14e5-4edf-949f-2266166c87aa"). InnerVolumeSpecName "kube-api-access-fzkvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:15:04 crc kubenswrapper[4805]: I1216 13:15:04.760268 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzkvd\" (UniqueName: \"kubernetes.io/projected/359f066f-14e5-4edf-949f-2266166c87aa-kube-api-access-fzkvd\") on node \"crc\" DevicePath \"\"" Dec 16 13:15:04 crc kubenswrapper[4805]: I1216 13:15:04.760564 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/359f066f-14e5-4edf-949f-2266166c87aa-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 16 13:15:05 crc kubenswrapper[4805]: I1216 13:15:05.481163 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431515-b8xqw" Dec 16 13:15:05 crc kubenswrapper[4805]: I1216 13:15:05.645304 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv"] Dec 16 13:15:05 crc kubenswrapper[4805]: I1216 13:15:05.656970 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431470-n8kpv"] Dec 16 13:15:06 crc kubenswrapper[4805]: I1216 13:15:06.564233 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccfeac0e-ca88-4939-8ef7-f84513bc4eb7" path="/var/lib/kubelet/pods/ccfeac0e-ca88-4939-8ef7-f84513bc4eb7/volumes" Dec 16 13:15:18 crc kubenswrapper[4805]: I1216 13:15:18.309979 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hnr9j"] Dec 16 13:15:18 crc kubenswrapper[4805]: E1216 13:15:18.310950 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359f066f-14e5-4edf-949f-2266166c87aa" containerName="collect-profiles" Dec 16 13:15:18 crc kubenswrapper[4805]: I1216 13:15:18.310964 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="359f066f-14e5-4edf-949f-2266166c87aa" containerName="collect-profiles" Dec 16 13:15:18 crc kubenswrapper[4805]: I1216 13:15:18.311246 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="359f066f-14e5-4edf-949f-2266166c87aa" containerName="collect-profiles" Dec 16 13:15:18 crc kubenswrapper[4805]: I1216 13:15:18.312872 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:18 crc kubenswrapper[4805]: I1216 13:15:18.332636 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hnr9j"] Dec 16 13:15:18 crc kubenswrapper[4805]: I1216 13:15:18.432121 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70029f1-f430-4c7b-a7ea-c01b7bed3779-catalog-content\") pod \"redhat-operators-hnr9j\" (UID: \"d70029f1-f430-4c7b-a7ea-c01b7bed3779\") " pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:18 crc kubenswrapper[4805]: I1216 13:15:18.432642 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70029f1-f430-4c7b-a7ea-c01b7bed3779-utilities\") pod \"redhat-operators-hnr9j\" (UID: \"d70029f1-f430-4c7b-a7ea-c01b7bed3779\") " pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:18 crc kubenswrapper[4805]: I1216 13:15:18.432774 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prswg\" (UniqueName: \"kubernetes.io/projected/d70029f1-f430-4c7b-a7ea-c01b7bed3779-kube-api-access-prswg\") pod \"redhat-operators-hnr9j\" (UID: \"d70029f1-f430-4c7b-a7ea-c01b7bed3779\") " pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:18 crc kubenswrapper[4805]: I1216 13:15:18.534667 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70029f1-f430-4c7b-a7ea-c01b7bed3779-utilities\") pod \"redhat-operators-hnr9j\" (UID: \"d70029f1-f430-4c7b-a7ea-c01b7bed3779\") " pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:18 crc kubenswrapper[4805]: I1216 13:15:18.534735 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prswg\" (UniqueName: \"kubernetes.io/projected/d70029f1-f430-4c7b-a7ea-c01b7bed3779-kube-api-access-prswg\") pod \"redhat-operators-hnr9j\" (UID: \"d70029f1-f430-4c7b-a7ea-c01b7bed3779\") " pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:18 crc kubenswrapper[4805]: I1216 13:15:18.534794 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70029f1-f430-4c7b-a7ea-c01b7bed3779-catalog-content\") pod \"redhat-operators-hnr9j\" (UID: \"d70029f1-f430-4c7b-a7ea-c01b7bed3779\") " pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:18 crc kubenswrapper[4805]: I1216 13:15:18.535398 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70029f1-f430-4c7b-a7ea-c01b7bed3779-utilities\") pod \"redhat-operators-hnr9j\" (UID: \"d70029f1-f430-4c7b-a7ea-c01b7bed3779\") " pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:18 crc kubenswrapper[4805]: I1216 13:15:18.535407 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70029f1-f430-4c7b-a7ea-c01b7bed3779-catalog-content\") pod \"redhat-operators-hnr9j\" (UID: \"d70029f1-f430-4c7b-a7ea-c01b7bed3779\") " pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:18 crc kubenswrapper[4805]: I1216 13:15:18.559004 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prswg\" (UniqueName: \"kubernetes.io/projected/d70029f1-f430-4c7b-a7ea-c01b7bed3779-kube-api-access-prswg\") pod \"redhat-operators-hnr9j\" (UID: \"d70029f1-f430-4c7b-a7ea-c01b7bed3779\") " pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:18 crc kubenswrapper[4805]: I1216 13:15:18.637420 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:19 crc kubenswrapper[4805]: I1216 13:15:19.154715 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hnr9j"] Dec 16 13:15:19 crc kubenswrapper[4805]: I1216 13:15:19.606981 4805 generic.go:334] "Generic (PLEG): container finished" podID="d70029f1-f430-4c7b-a7ea-c01b7bed3779" containerID="8c12874fe0c737fd2a06b1f7006fb576bce7c13688dbb3a1bcf3eb0e46b4243a" exitCode=0 Dec 16 13:15:19 crc kubenswrapper[4805]: I1216 13:15:19.607316 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnr9j" event={"ID":"d70029f1-f430-4c7b-a7ea-c01b7bed3779","Type":"ContainerDied","Data":"8c12874fe0c737fd2a06b1f7006fb576bce7c13688dbb3a1bcf3eb0e46b4243a"} Dec 16 13:15:19 crc kubenswrapper[4805]: I1216 13:15:19.607354 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnr9j" event={"ID":"d70029f1-f430-4c7b-a7ea-c01b7bed3779","Type":"ContainerStarted","Data":"81560b8c3e2616fbf0c55c5ebc362fb81145c67274e0a6815a1c13c03c7a3a91"} Dec 16 13:15:19 crc kubenswrapper[4805]: I1216 13:15:19.609163 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 16 13:15:20 crc kubenswrapper[4805]: I1216 13:15:20.622300 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnr9j" event={"ID":"d70029f1-f430-4c7b-a7ea-c01b7bed3779","Type":"ContainerStarted","Data":"4f8f69ae46cad41b2ef608579b8d8e1439abe8ee8841b7a11bf02c8faccb46ff"} Dec 16 13:15:26 crc kubenswrapper[4805]: I1216 13:15:26.685679 4805 generic.go:334] "Generic (PLEG): container finished" podID="d70029f1-f430-4c7b-a7ea-c01b7bed3779" containerID="4f8f69ae46cad41b2ef608579b8d8e1439abe8ee8841b7a11bf02c8faccb46ff" exitCode=0 Dec 16 13:15:26 crc kubenswrapper[4805]: I1216 13:15:26.685903 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnr9j" event={"ID":"d70029f1-f430-4c7b-a7ea-c01b7bed3779","Type":"ContainerDied","Data":"4f8f69ae46cad41b2ef608579b8d8e1439abe8ee8841b7a11bf02c8faccb46ff"} Dec 16 13:15:27 crc kubenswrapper[4805]: I1216 13:15:27.072687 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 13:15:27 crc kubenswrapper[4805]: I1216 13:15:27.072787 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 13:15:27 crc kubenswrapper[4805]: I1216 13:15:27.704356 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnr9j" event={"ID":"d70029f1-f430-4c7b-a7ea-c01b7bed3779","Type":"ContainerStarted","Data":"95699447b744f788792d0896adc82a012e021465b9b41033c815285c479ed952"} Dec 16 13:15:27 crc kubenswrapper[4805]: I1216 13:15:27.727373 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hnr9j" podStartSLOduration=2.030714279 podStartE2EDuration="9.727349579s" podCreationTimestamp="2025-12-16 
13:15:18 +0000 UTC" firstStartedPulling="2025-12-16 13:15:19.608876442 +0000 UTC m=+4793.327134247" lastFinishedPulling="2025-12-16 13:15:27.305511742 +0000 UTC m=+4801.023769547" observedRunningTime="2025-12-16 13:15:27.724111406 +0000 UTC m=+4801.442369221" watchObservedRunningTime="2025-12-16 13:15:27.727349579 +0000 UTC m=+4801.445607394" Dec 16 13:15:28 crc kubenswrapper[4805]: I1216 13:15:28.638415 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:28 crc kubenswrapper[4805]: I1216 13:15:28.638744 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:29 crc kubenswrapper[4805]: I1216 13:15:29.692065 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hnr9j" podUID="d70029f1-f430-4c7b-a7ea-c01b7bed3779" containerName="registry-server" probeResult="failure" output=< Dec 16 13:15:29 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 16 13:15:29 crc kubenswrapper[4805]: > Dec 16 13:15:38 crc kubenswrapper[4805]: I1216 13:15:38.694168 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:38 crc kubenswrapper[4805]: I1216 13:15:38.764807 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:38 crc kubenswrapper[4805]: I1216 13:15:38.814357 4805 generic.go:334] "Generic (PLEG): container finished" podID="96a2c3a4-408a-4437-9a22-bc7c41f87222" containerID="ad88aef882332705252a369420f41dc2ef2a393857548a02daeec35829767921" exitCode=0 Dec 16 13:15:38 crc kubenswrapper[4805]: I1216 13:15:38.814442 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"96a2c3a4-408a-4437-9a22-bc7c41f87222","Type":"ContainerDied","Data":"ad88aef882332705252a369420f41dc2ef2a393857548a02daeec35829767921"} Dec 16 13:15:38 crc kubenswrapper[4805]: I1216 13:15:38.934783 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hnr9j"] Dec 16 13:15:39 crc kubenswrapper[4805]: I1216 13:15:39.824677 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hnr9j" podUID="d70029f1-f430-4c7b-a7ea-c01b7bed3779" containerName="registry-server" containerID="cri-o://95699447b744f788792d0896adc82a012e021465b9b41033c815285c479ed952" gracePeriod=2 Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.284396 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.385050 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.390867 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-ssh-key\") pod \"96a2c3a4-408a-4437-9a22-bc7c41f87222\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.391051 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnpzs\" (UniqueName: \"kubernetes.io/projected/96a2c3a4-408a-4437-9a22-bc7c41f87222-kube-api-access-vnpzs\") pod \"96a2c3a4-408a-4437-9a22-bc7c41f87222\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.391107 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/96a2c3a4-408a-4437-9a22-bc7c41f87222-openstack-config\") pod \"96a2c3a4-408a-4437-9a22-bc7c41f87222\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.391178 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/96a2c3a4-408a-4437-9a22-bc7c41f87222-test-operator-ephemeral-temporary\") pod \"96a2c3a4-408a-4437-9a22-bc7c41f87222\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.391264 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"96a2c3a4-408a-4437-9a22-bc7c41f87222\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.391297 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/96a2c3a4-408a-4437-9a22-bc7c41f87222-test-operator-ephemeral-workdir\") pod \"96a2c3a4-408a-4437-9a22-bc7c41f87222\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.391325 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-openstack-config-secret\") pod \"96a2c3a4-408a-4437-9a22-bc7c41f87222\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.391365 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96a2c3a4-408a-4437-9a22-bc7c41f87222-config-data\") pod \"96a2c3a4-408a-4437-9a22-bc7c41f87222\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.391446 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-ca-certs\") pod \"96a2c3a4-408a-4437-9a22-bc7c41f87222\" (UID: \"96a2c3a4-408a-4437-9a22-bc7c41f87222\") " Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.392036 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96a2c3a4-408a-4437-9a22-bc7c41f87222-test-operator-ephemeral-temporary" 
(OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "96a2c3a4-408a-4437-9a22-bc7c41f87222" (UID: "96a2c3a4-408a-4437-9a22-bc7c41f87222"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.394173 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96a2c3a4-408a-4437-9a22-bc7c41f87222-config-data" (OuterVolumeSpecName: "config-data") pod "96a2c3a4-408a-4437-9a22-bc7c41f87222" (UID: "96a2c3a4-408a-4437-9a22-bc7c41f87222"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.397535 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "96a2c3a4-408a-4437-9a22-bc7c41f87222" (UID: "96a2c3a4-408a-4437-9a22-bc7c41f87222"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.399049 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96a2c3a4-408a-4437-9a22-bc7c41f87222-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "96a2c3a4-408a-4437-9a22-bc7c41f87222" (UID: "96a2c3a4-408a-4437-9a22-bc7c41f87222"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.402007 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96a2c3a4-408a-4437-9a22-bc7c41f87222-kube-api-access-vnpzs" (OuterVolumeSpecName: "kube-api-access-vnpzs") pod "96a2c3a4-408a-4437-9a22-bc7c41f87222" (UID: "96a2c3a4-408a-4437-9a22-bc7c41f87222"). InnerVolumeSpecName "kube-api-access-vnpzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.428180 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "96a2c3a4-408a-4437-9a22-bc7c41f87222" (UID: "96a2c3a4-408a-4437-9a22-bc7c41f87222"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.434264 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "96a2c3a4-408a-4437-9a22-bc7c41f87222" (UID: "96a2c3a4-408a-4437-9a22-bc7c41f87222"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.435532 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "96a2c3a4-408a-4437-9a22-bc7c41f87222" (UID: "96a2c3a4-408a-4437-9a22-bc7c41f87222"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.448622 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96a2c3a4-408a-4437-9a22-bc7c41f87222-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "96a2c3a4-408a-4437-9a22-bc7c41f87222" (UID: "96a2c3a4-408a-4437-9a22-bc7c41f87222"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.492733 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70029f1-f430-4c7b-a7ea-c01b7bed3779-catalog-content\") pod \"d70029f1-f430-4c7b-a7ea-c01b7bed3779\" (UID: \"d70029f1-f430-4c7b-a7ea-c01b7bed3779\") " Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.492805 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70029f1-f430-4c7b-a7ea-c01b7bed3779-utilities\") pod \"d70029f1-f430-4c7b-a7ea-c01b7bed3779\" (UID: \"d70029f1-f430-4c7b-a7ea-c01b7bed3779\") " Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.493036 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prswg\" (UniqueName: \"kubernetes.io/projected/d70029f1-f430-4c7b-a7ea-c01b7bed3779-kube-api-access-prswg\") pod \"d70029f1-f430-4c7b-a7ea-c01b7bed3779\" (UID: \"d70029f1-f430-4c7b-a7ea-c01b7bed3779\") " Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.493994 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d70029f1-f430-4c7b-a7ea-c01b7bed3779-utilities" (OuterVolumeSpecName: "utilities") pod "d70029f1-f430-4c7b-a7ea-c01b7bed3779" (UID: "d70029f1-f430-4c7b-a7ea-c01b7bed3779"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.496589 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.496620 4805 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/96a2c3a4-408a-4437-9a22-bc7c41f87222-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.496635 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.496649 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96a2c3a4-408a-4437-9a22-bc7c41f87222-config-data\") on node \"crc\" DevicePath \"\"" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.496660 4805 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.496672 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96a2c3a4-408a-4437-9a22-bc7c41f87222-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.496683 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnpzs\" (UniqueName: \"kubernetes.io/projected/96a2c3a4-408a-4437-9a22-bc7c41f87222-kube-api-access-vnpzs\") on node \"crc\" DevicePath \"\"" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.496696 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/96a2c3a4-408a-4437-9a22-bc7c41f87222-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.496708 4805 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/96a2c3a4-408a-4437-9a22-bc7c41f87222-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.501313 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d70029f1-f430-4c7b-a7ea-c01b7bed3779-kube-api-access-prswg" (OuterVolumeSpecName: "kube-api-access-prswg") pod "d70029f1-f430-4c7b-a7ea-c01b7bed3779" (UID: "d70029f1-f430-4c7b-a7ea-c01b7bed3779"). InnerVolumeSpecName "kube-api-access-prswg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.527510 4805 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.598703 4805 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.598895 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prswg\" (UniqueName: \"kubernetes.io/projected/d70029f1-f430-4c7b-a7ea-c01b7bed3779-kube-api-access-prswg\") on node \"crc\" DevicePath \"\"" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.598908 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70029f1-f430-4c7b-a7ea-c01b7bed3779-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.609532 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d70029f1-f430-4c7b-a7ea-c01b7bed3779-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d70029f1-f430-4c7b-a7ea-c01b7bed3779" (UID: "d70029f1-f430-4c7b-a7ea-c01b7bed3779"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.703446 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70029f1-f430-4c7b-a7ea-c01b7bed3779-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.835464 4805 generic.go:334] "Generic (PLEG): container finished" podID="d70029f1-f430-4c7b-a7ea-c01b7bed3779" containerID="95699447b744f788792d0896adc82a012e021465b9b41033c815285c479ed952" exitCode=0 Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.836457 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hnr9j" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.837438 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnr9j" event={"ID":"d70029f1-f430-4c7b-a7ea-c01b7bed3779","Type":"ContainerDied","Data":"95699447b744f788792d0896adc82a012e021465b9b41033c815285c479ed952"} Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.837495 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnr9j" event={"ID":"d70029f1-f430-4c7b-a7ea-c01b7bed3779","Type":"ContainerDied","Data":"81560b8c3e2616fbf0c55c5ebc362fb81145c67274e0a6815a1c13c03c7a3a91"} Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.837559 4805 scope.go:117] "RemoveContainer" containerID="95699447b744f788792d0896adc82a012e021465b9b41033c815285c479ed952" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.840470 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"96a2c3a4-408a-4437-9a22-bc7c41f87222","Type":"ContainerDied","Data":"825f3ad74229c7e0549b8648a3c818de26ebe3e27f6677b9525f4f253e7ae2fd"} Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.840500 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="825f3ad74229c7e0549b8648a3c818de26ebe3e27f6677b9525f4f253e7ae2fd" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.840574 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.865296 4805 scope.go:117] "RemoveContainer" containerID="4f8f69ae46cad41b2ef608579b8d8e1439abe8ee8841b7a11bf02c8faccb46ff" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.910990 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hnr9j"] Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.911340 4805 scope.go:117] "RemoveContainer" containerID="8c12874fe0c737fd2a06b1f7006fb576bce7c13688dbb3a1bcf3eb0e46b4243a" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.933887 4805 scope.go:117] "RemoveContainer" containerID="95699447b744f788792d0896adc82a012e021465b9b41033c815285c479ed952" Dec 16 13:15:40 crc kubenswrapper[4805]: E1216 13:15:40.934860 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95699447b744f788792d0896adc82a012e021465b9b41033c815285c479ed952\": container with ID starting with 95699447b744f788792d0896adc82a012e021465b9b41033c815285c479ed952 not found: ID does not exist" containerID="95699447b744f788792d0896adc82a012e021465b9b41033c815285c479ed952" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.934911 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95699447b744f788792d0896adc82a012e021465b9b41033c815285c479ed952"} err="failed to get container status \"95699447b744f788792d0896adc82a012e021465b9b41033c815285c479ed952\": rpc error: code = NotFound desc = could not find container \"95699447b744f788792d0896adc82a012e021465b9b41033c815285c479ed952\": container with ID starting with 95699447b744f788792d0896adc82a012e021465b9b41033c815285c479ed952 not found: ID does not exist" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.934939 4805 scope.go:117] "RemoveContainer" containerID="4f8f69ae46cad41b2ef608579b8d8e1439abe8ee8841b7a11bf02c8faccb46ff" Dec 16 
13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.936892 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hnr9j"] Dec 16 13:15:40 crc kubenswrapper[4805]: E1216 13:15:40.942918 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f8f69ae46cad41b2ef608579b8d8e1439abe8ee8841b7a11bf02c8faccb46ff\": container with ID starting with 4f8f69ae46cad41b2ef608579b8d8e1439abe8ee8841b7a11bf02c8faccb46ff not found: ID does not exist" containerID="4f8f69ae46cad41b2ef608579b8d8e1439abe8ee8841b7a11bf02c8faccb46ff" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.943001 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f8f69ae46cad41b2ef608579b8d8e1439abe8ee8841b7a11bf02c8faccb46ff"} err="failed to get container status \"4f8f69ae46cad41b2ef608579b8d8e1439abe8ee8841b7a11bf02c8faccb46ff\": rpc error: code = NotFound desc = could not find container \"4f8f69ae46cad41b2ef608579b8d8e1439abe8ee8841b7a11bf02c8faccb46ff\": container with ID starting with 4f8f69ae46cad41b2ef608579b8d8e1439abe8ee8841b7a11bf02c8faccb46ff not found: ID does not exist" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.943039 4805 scope.go:117] "RemoveContainer" containerID="8c12874fe0c737fd2a06b1f7006fb576bce7c13688dbb3a1bcf3eb0e46b4243a" Dec 16 13:15:40 crc kubenswrapper[4805]: E1216 13:15:40.944248 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c12874fe0c737fd2a06b1f7006fb576bce7c13688dbb3a1bcf3eb0e46b4243a\": container with ID starting with 8c12874fe0c737fd2a06b1f7006fb576bce7c13688dbb3a1bcf3eb0e46b4243a not found: ID does not exist" containerID="8c12874fe0c737fd2a06b1f7006fb576bce7c13688dbb3a1bcf3eb0e46b4243a" Dec 16 13:15:40 crc kubenswrapper[4805]: I1216 13:15:40.944286 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c12874fe0c737fd2a06b1f7006fb576bce7c13688dbb3a1bcf3eb0e46b4243a"} err="failed to get container status \"8c12874fe0c737fd2a06b1f7006fb576bce7c13688dbb3a1bcf3eb0e46b4243a\": rpc error: code = NotFound desc = could not find container \"8c12874fe0c737fd2a06b1f7006fb576bce7c13688dbb3a1bcf3eb0e46b4243a\": container with ID starting with 8c12874fe0c737fd2a06b1f7006fb576bce7c13688dbb3a1bcf3eb0e46b4243a not found: ID does not exist" Dec 16 13:15:42 crc kubenswrapper[4805]: I1216 13:15:42.532123 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d70029f1-f430-4c7b-a7ea-c01b7bed3779" path="/var/lib/kubelet/pods/d70029f1-f430-4c7b-a7ea-c01b7bed3779/volumes" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.589250 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 16 13:15:46 crc kubenswrapper[4805]: E1216 13:15:46.590361 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a2c3a4-408a-4437-9a22-bc7c41f87222" containerName="tempest-tests-tempest-tests-runner" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.590383 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a2c3a4-408a-4437-9a22-bc7c41f87222" containerName="tempest-tests-tempest-tests-runner" Dec 16 13:15:46 crc kubenswrapper[4805]: E1216 13:15:46.590400 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70029f1-f430-4c7b-a7ea-c01b7bed3779" containerName="extract-content" Dec 16 13:15:46 crc 
kubenswrapper[4805]: I1216 13:15:46.590407 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70029f1-f430-4c7b-a7ea-c01b7bed3779" containerName="extract-content" Dec 16 13:15:46 crc kubenswrapper[4805]: E1216 13:15:46.590427 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70029f1-f430-4c7b-a7ea-c01b7bed3779" containerName="registry-server" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.590435 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70029f1-f430-4c7b-a7ea-c01b7bed3779" containerName="registry-server" Dec 16 13:15:46 crc kubenswrapper[4805]: E1216 13:15:46.590451 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70029f1-f430-4c7b-a7ea-c01b7bed3779" containerName="extract-utilities" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.590458 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70029f1-f430-4c7b-a7ea-c01b7bed3779" containerName="extract-utilities" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.590713 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d70029f1-f430-4c7b-a7ea-c01b7bed3779" containerName="registry-server" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.590734 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="96a2c3a4-408a-4437-9a22-bc7c41f87222" containerName="tempest-tests-tempest-tests-runner" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.591600 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.596470 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j58t8" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.610003 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.742495 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbvhv\" (UniqueName: \"kubernetes.io/projected/5858c1f1-d24f-4e97-85a1-b84b85c6a0ce-kube-api-access-fbvhv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5858c1f1-d24f-4e97-85a1-b84b85c6a0ce\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.742598 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5858c1f1-d24f-4e97-85a1-b84b85c6a0ce\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.844806 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbvhv\" (UniqueName: \"kubernetes.io/projected/5858c1f1-d24f-4e97-85a1-b84b85c6a0ce-kube-api-access-fbvhv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5858c1f1-d24f-4e97-85a1-b84b85c6a0ce\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.845913 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5858c1f1-d24f-4e97-85a1-b84b85c6a0ce\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.846551 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5858c1f1-d24f-4e97-85a1-b84b85c6a0ce\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.868581 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbvhv\" (UniqueName: \"kubernetes.io/projected/5858c1f1-d24f-4e97-85a1-b84b85c6a0ce-kube-api-access-fbvhv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5858c1f1-d24f-4e97-85a1-b84b85c6a0ce\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.889327 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5858c1f1-d24f-4e97-85a1-b84b85c6a0ce\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 16 13:15:46 crc kubenswrapper[4805]: I1216 13:15:46.913124 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 16 13:15:47 crc kubenswrapper[4805]: I1216 13:15:47.382635 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 16 13:15:47 crc kubenswrapper[4805]: I1216 13:15:47.909602 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"5858c1f1-d24f-4e97-85a1-b84b85c6a0ce","Type":"ContainerStarted","Data":"ecf0af992aab94fa808fed36fb2fc84a55e6f5302c1b1c24fedbfd8cd3d4b1b1"} Dec 16 13:15:48 crc kubenswrapper[4805]: I1216 13:15:48.928906 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"5858c1f1-d24f-4e97-85a1-b84b85c6a0ce","Type":"ContainerStarted","Data":"287406e4334b121d0c8e68d3ab9020e9abc8ad54fc0cf25007db564b813290b8"} Dec 16 13:15:48 crc kubenswrapper[4805]: I1216 13:15:48.954944 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.7613541750000001 podStartE2EDuration="2.954916826s" podCreationTimestamp="2025-12-16 13:15:46 +0000 UTC" firstStartedPulling="2025-12-16 13:15:47.395267413 +0000 UTC m=+4821.113525218" lastFinishedPulling="2025-12-16 13:15:48.588830064 +0000 UTC m=+4822.307087869" observedRunningTime="2025-12-16 13:15:48.941923555 +0000 UTC m=+4822.660181380" watchObservedRunningTime="2025-12-16 13:15:48.954916826 +0000 UTC m=+4822.673174671" Dec 16 13:15:57 crc kubenswrapper[4805]: I1216 13:15:57.072100 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 16 13:15:57 crc kubenswrapper[4805]: I1216 13:15:57.072633 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 13:16:01 crc kubenswrapper[4805]: I1216 13:16:01.390356 4805 scope.go:117] "RemoveContainer" containerID="d94d654db4b56ce7b9711854867cbdc5ffc0c00e262109b75f53332967c865a1" Dec 16 13:16:13 crc kubenswrapper[4805]: I1216 13:16:13.131287 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dbrmg/must-gather-dtw2n"] Dec 16 13:16:13 crc kubenswrapper[4805]: I1216 13:16:13.133773 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dbrmg/must-gather-dtw2n" Dec 16 13:16:13 crc kubenswrapper[4805]: I1216 13:16:13.141432 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dbrmg"/"kube-root-ca.crt" Dec 16 13:16:13 crc kubenswrapper[4805]: I1216 13:16:13.142696 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dbrmg"/"openshift-service-ca.crt" Dec 16 13:16:13 crc kubenswrapper[4805]: I1216 13:16:13.142817 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dbrmg"/"default-dockercfg-8sqkt" Dec 16 13:16:13 crc kubenswrapper[4805]: I1216 13:16:13.164548 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dbrmg/must-gather-dtw2n"] Dec 16 13:16:13 crc kubenswrapper[4805]: I1216 13:16:13.283227 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhthc\" (UniqueName: \"kubernetes.io/projected/42a345f5-d310-4907-8fe5-be3b705c774d-kube-api-access-jhthc\") pod \"must-gather-dtw2n\" (UID: \"42a345f5-d310-4907-8fe5-be3b705c774d\") " pod="openshift-must-gather-dbrmg/must-gather-dtw2n" Dec 16 13:16:13 crc kubenswrapper[4805]: I1216 13:16:13.283558 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42a345f5-d310-4907-8fe5-be3b705c774d-must-gather-output\") pod \"must-gather-dtw2n\" (UID: \"42a345f5-d310-4907-8fe5-be3b705c774d\") " pod="openshift-must-gather-dbrmg/must-gather-dtw2n" Dec 16 13:16:13 crc kubenswrapper[4805]: I1216 13:16:13.385593 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhthc\" (UniqueName: \"kubernetes.io/projected/42a345f5-d310-4907-8fe5-be3b705c774d-kube-api-access-jhthc\") pod \"must-gather-dtw2n\" (UID: \"42a345f5-d310-4907-8fe5-be3b705c774d\") " pod="openshift-must-gather-dbrmg/must-gather-dtw2n" Dec 16 13:16:13 crc kubenswrapper[4805]: I1216 13:16:13.385675 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42a345f5-d310-4907-8fe5-be3b705c774d-must-gather-output\") pod \"must-gather-dtw2n\" (UID: \"42a345f5-d310-4907-8fe5-be3b705c774d\") " pod="openshift-must-gather-dbrmg/must-gather-dtw2n" Dec 16 13:16:13 crc kubenswrapper[4805]: I1216 13:16:13.386122 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42a345f5-d310-4907-8fe5-be3b705c774d-must-gather-output\") 
pod \"must-gather-dtw2n\" (UID: \"42a345f5-d310-4907-8fe5-be3b705c774d\") " pod="openshift-must-gather-dbrmg/must-gather-dtw2n" Dec 16 13:16:13 crc kubenswrapper[4805]: I1216 13:16:13.411497 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhthc\" (UniqueName: \"kubernetes.io/projected/42a345f5-d310-4907-8fe5-be3b705c774d-kube-api-access-jhthc\") pod \"must-gather-dtw2n\" (UID: \"42a345f5-d310-4907-8fe5-be3b705c774d\") " pod="openshift-must-gather-dbrmg/must-gather-dtw2n" Dec 16 13:16:13 crc kubenswrapper[4805]: I1216 13:16:13.457973 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dbrmg/must-gather-dtw2n" Dec 16 13:16:14 crc kubenswrapper[4805]: I1216 13:16:14.010034 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dbrmg/must-gather-dtw2n"] Dec 16 13:16:14 crc kubenswrapper[4805]: I1216 13:16:14.170633 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dbrmg/must-gather-dtw2n" event={"ID":"42a345f5-d310-4907-8fe5-be3b705c774d","Type":"ContainerStarted","Data":"a18472792050d35f37241baa41df6ff2b546336bef6cf02a8fb6cb192099f590"} Dec 16 13:16:23 crc kubenswrapper[4805]: I1216 13:16:23.658116 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dbrmg/must-gather-dtw2n" event={"ID":"42a345f5-d310-4907-8fe5-be3b705c774d","Type":"ContainerStarted","Data":"c3e69f450bc782ee8c391eca9dfe352918256dd92925c190725eb8d7a9ca51c6"} Dec 16 13:16:23 crc kubenswrapper[4805]: I1216 13:16:23.658631 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dbrmg/must-gather-dtw2n" event={"ID":"42a345f5-d310-4907-8fe5-be3b705c774d","Type":"ContainerStarted","Data":"6956c477298e37297736ed06f3b1feb79ea841d83cdb0fbdd981698e338e24ef"} Dec 16 13:16:23 crc kubenswrapper[4805]: I1216 13:16:23.684641 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dbrmg/must-gather-dtw2n" podStartSLOduration=1.524934847 podStartE2EDuration="10.684621387s" podCreationTimestamp="2025-12-16 13:16:13 +0000 UTC" firstStartedPulling="2025-12-16 13:16:14.034797197 +0000 UTC m=+4847.753055002" lastFinishedPulling="2025-12-16 13:16:23.194483727 +0000 UTC m=+4856.912741542" observedRunningTime="2025-12-16 13:16:23.678019028 +0000 UTC m=+4857.396276833" watchObservedRunningTime="2025-12-16 13:16:23.684621387 +0000 UTC m=+4857.402879202" Dec 16 13:16:27 crc kubenswrapper[4805]: I1216 13:16:27.071172 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 13:16:27 crc kubenswrapper[4805]: I1216 13:16:27.072385 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 13:16:27 crc kubenswrapper[4805]: I1216 13:16:27.072446 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 13:16:27 crc kubenswrapper[4805]: I1216 13:16:27.073246 4805 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4fff8ab81a79c6040aa17493152ea9972a4d9da7b64f4dc0cd2a4c74183132a"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 13:16:27 crc kubenswrapper[4805]: I1216 13:16:27.073304 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://f4fff8ab81a79c6040aa17493152ea9972a4d9da7b64f4dc0cd2a4c74183132a" gracePeriod=600 Dec 16 13:16:27 crc kubenswrapper[4805]: E1216 13:16:27.682964 4805 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.27:56326->38.102.83.27:32803: write tcp 38.102.83.27:56326->38.102.83.27:32803: write: connection reset by peer Dec 16 13:16:27 crc kubenswrapper[4805]: I1216 13:16:27.704281 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="f4fff8ab81a79c6040aa17493152ea9972a4d9da7b64f4dc0cd2a4c74183132a" exitCode=0 Dec 16 13:16:27 crc kubenswrapper[4805]: I1216 13:16:27.704325 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"f4fff8ab81a79c6040aa17493152ea9972a4d9da7b64f4dc0cd2a4c74183132a"} Dec 16 13:16:27 crc kubenswrapper[4805]: I1216 13:16:27.704358 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2"} Dec 16 13:16:27 crc kubenswrapper[4805]: I1216 13:16:27.704379 4805 scope.go:117] "RemoveContainer" containerID="3af58145acd1d5e06b5f6c5ae9b29647334d9c994ff8a8cbca1166eff61b90d4" Dec 16 13:16:28 crc kubenswrapper[4805]: I1216 13:16:28.640328 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dbrmg/crc-debug-7n47l"] Dec 16 13:16:28 crc kubenswrapper[4805]: I1216 13:16:28.642234 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dbrmg/crc-debug-7n47l" Dec 16 13:16:28 crc kubenswrapper[4805]: I1216 13:16:28.757902 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edb292d1-f8cf-4ff6-aae5-7734adac5c9b-host\") pod \"crc-debug-7n47l\" (UID: \"edb292d1-f8cf-4ff6-aae5-7734adac5c9b\") " pod="openshift-must-gather-dbrmg/crc-debug-7n47l" Dec 16 13:16:28 crc kubenswrapper[4805]: I1216 13:16:28.758124 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvgm9\" (UniqueName: \"kubernetes.io/projected/edb292d1-f8cf-4ff6-aae5-7734adac5c9b-kube-api-access-bvgm9\") pod \"crc-debug-7n47l\" (UID: \"edb292d1-f8cf-4ff6-aae5-7734adac5c9b\") " pod="openshift-must-gather-dbrmg/crc-debug-7n47l" Dec 16 13:16:28 crc kubenswrapper[4805]: I1216 13:16:28.998459 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvgm9\" (UniqueName: \"kubernetes.io/projected/edb292d1-f8cf-4ff6-aae5-7734adac5c9b-kube-api-access-bvgm9\") pod \"crc-debug-7n47l\" (UID: \"edb292d1-f8cf-4ff6-aae5-7734adac5c9b\") " pod="openshift-must-gather-dbrmg/crc-debug-7n47l" Dec 16 13:16:28 crc kubenswrapper[4805]: I1216 13:16:28.998863 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edb292d1-f8cf-4ff6-aae5-7734adac5c9b-host\") pod \"crc-debug-7n47l\" (UID: \"edb292d1-f8cf-4ff6-aae5-7734adac5c9b\") " pod="openshift-must-gather-dbrmg/crc-debug-7n47l" Dec 16 13:16:28 crc kubenswrapper[4805]: I1216 13:16:28.999166 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edb292d1-f8cf-4ff6-aae5-7734adac5c9b-host\") pod \"crc-debug-7n47l\" (UID: \"edb292d1-f8cf-4ff6-aae5-7734adac5c9b\") " pod="openshift-must-gather-dbrmg/crc-debug-7n47l" Dec 16 13:16:29 crc kubenswrapper[4805]: I1216 13:16:29.064662 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvgm9\" (UniqueName: \"kubernetes.io/projected/edb292d1-f8cf-4ff6-aae5-7734adac5c9b-kube-api-access-bvgm9\") pod \"crc-debug-7n47l\" (UID: \"edb292d1-f8cf-4ff6-aae5-7734adac5c9b\") " pod="openshift-must-gather-dbrmg/crc-debug-7n47l" Dec 16 13:16:29 crc kubenswrapper[4805]: I1216 13:16:29.265433 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dbrmg/crc-debug-7n47l" Dec 16 13:16:29 crc kubenswrapper[4805]: I1216 13:16:29.730376 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dbrmg/crc-debug-7n47l" event={"ID":"edb292d1-f8cf-4ff6-aae5-7734adac5c9b","Type":"ContainerStarted","Data":"c8725f3334301c06d05335b2b54d28acdc95c68fd526aa2b95674b12586b3f5d"} Dec 16 13:16:41 crc kubenswrapper[4805]: I1216 13:16:41.883003 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dbrmg/crc-debug-7n47l" event={"ID":"edb292d1-f8cf-4ff6-aae5-7734adac5c9b","Type":"ContainerStarted","Data":"91af1edf2fbc6faaf92f46b3963390491e551c85fdbf36a578efb2dbe07d79bf"} Dec 16 13:16:41 crc kubenswrapper[4805]: I1216 13:16:41.903558 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dbrmg/crc-debug-7n47l" podStartSLOduration=2.267283902 podStartE2EDuration="13.903528703s" podCreationTimestamp="2025-12-16 13:16:28 +0000 UTC" firstStartedPulling="2025-12-16 13:16:29.369976415 +0000 UTC m=+4863.088234220" lastFinishedPulling="2025-12-16 13:16:41.006221226 +0000 UTC m=+4874.724479021" observedRunningTime="2025-12-16 13:16:41.899445907 +0000 UTC m=+4875.617703712" watchObservedRunningTime="2025-12-16 13:16:41.903528703 +0000 UTC m=+4875.621786518" Dec 16 13:17:38 crc kubenswrapper[4805]: I1216 13:17:38.418540 4805 generic.go:334] "Generic (PLEG): container finished" podID="edb292d1-f8cf-4ff6-aae5-7734adac5c9b" containerID="91af1edf2fbc6faaf92f46b3963390491e551c85fdbf36a578efb2dbe07d79bf" exitCode=0 Dec 16 13:17:38 crc kubenswrapper[4805]: I1216 13:17:38.418626 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dbrmg/crc-debug-7n47l" event={"ID":"edb292d1-f8cf-4ff6-aae5-7734adac5c9b","Type":"ContainerDied","Data":"91af1edf2fbc6faaf92f46b3963390491e551c85fdbf36a578efb2dbe07d79bf"} Dec 16 13:17:39 crc kubenswrapper[4805]: I1216 13:17:39.792330 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dbrmg/crc-debug-7n47l" Dec 16 13:17:39 crc kubenswrapper[4805]: I1216 13:17:39.812980 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edb292d1-f8cf-4ff6-aae5-7734adac5c9b-host\") pod \"edb292d1-f8cf-4ff6-aae5-7734adac5c9b\" (UID: \"edb292d1-f8cf-4ff6-aae5-7734adac5c9b\") " Dec 16 13:17:39 crc kubenswrapper[4805]: I1216 13:17:39.813104 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb292d1-f8cf-4ff6-aae5-7734adac5c9b-host" (OuterVolumeSpecName: "host") pod "edb292d1-f8cf-4ff6-aae5-7734adac5c9b" (UID: "edb292d1-f8cf-4ff6-aae5-7734adac5c9b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 13:17:39 crc kubenswrapper[4805]: I1216 13:17:39.813846 4805 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edb292d1-f8cf-4ff6-aae5-7734adac5c9b-host\") on node \"crc\" DevicePath \"\"" Dec 16 13:17:39 crc kubenswrapper[4805]: I1216 13:17:39.838613 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dbrmg/crc-debug-7n47l"] Dec 16 13:17:39 crc kubenswrapper[4805]: I1216 13:17:39.849921 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dbrmg/crc-debug-7n47l"] Dec 16 13:17:39 crc kubenswrapper[4805]: I1216 13:17:39.915105 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvgm9\" (UniqueName: \"kubernetes.io/projected/edb292d1-f8cf-4ff6-aae5-7734adac5c9b-kube-api-access-bvgm9\") pod \"edb292d1-f8cf-4ff6-aae5-7734adac5c9b\" (UID: \"edb292d1-f8cf-4ff6-aae5-7734adac5c9b\") " Dec 16 13:17:39 crc kubenswrapper[4805]: I1216 13:17:39.923391 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb292d1-f8cf-4ff6-aae5-7734adac5c9b-kube-api-access-bvgm9" (OuterVolumeSpecName: "kube-api-access-bvgm9") pod "edb292d1-f8cf-4ff6-aae5-7734adac5c9b" (UID: "edb292d1-f8cf-4ff6-aae5-7734adac5c9b"). InnerVolumeSpecName "kube-api-access-bvgm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:17:40 crc kubenswrapper[4805]: I1216 13:17:40.017808 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvgm9\" (UniqueName: \"kubernetes.io/projected/edb292d1-f8cf-4ff6-aae5-7734adac5c9b-kube-api-access-bvgm9\") on node \"crc\" DevicePath \"\"" Dec 16 13:17:40 crc kubenswrapper[4805]: I1216 13:17:40.437774 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8725f3334301c06d05335b2b54d28acdc95c68fd526aa2b95674b12586b3f5d" Dec 16 13:17:40 crc kubenswrapper[4805]: I1216 13:17:40.437818 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dbrmg/crc-debug-7n47l" Dec 16 13:17:40 crc kubenswrapper[4805]: I1216 13:17:40.546405 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edb292d1-f8cf-4ff6-aae5-7734adac5c9b" path="/var/lib/kubelet/pods/edb292d1-f8cf-4ff6-aae5-7734adac5c9b/volumes" Dec 16 13:17:41 crc kubenswrapper[4805]: I1216 13:17:41.081566 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dbrmg/crc-debug-pf9d6"] Dec 16 13:17:41 crc kubenswrapper[4805]: E1216 13:17:41.082008 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb292d1-f8cf-4ff6-aae5-7734adac5c9b" containerName="container-00" Dec 16 13:17:41 crc kubenswrapper[4805]: I1216 13:17:41.082023 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="edb292d1-f8cf-4ff6-aae5-7734adac5c9b" containerName="container-00" Dec 16 13:17:41 crc kubenswrapper[4805]: I1216 13:17:41.082292 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="edb292d1-f8cf-4ff6-aae5-7734adac5c9b" containerName="container-00" Dec 16 13:17:41 crc kubenswrapper[4805]: I1216 13:17:41.082956 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dbrmg/crc-debug-pf9d6" Dec 16 13:17:41 crc kubenswrapper[4805]: I1216 13:17:41.244918 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjnbv\" (UniqueName: \"kubernetes.io/projected/49fe7e18-6037-4431-85bb-b21e26cc632e-kube-api-access-fjnbv\") pod \"crc-debug-pf9d6\" (UID: \"49fe7e18-6037-4431-85bb-b21e26cc632e\") " pod="openshift-must-gather-dbrmg/crc-debug-pf9d6" Dec 16 13:17:41 crc kubenswrapper[4805]: I1216 13:17:41.245346 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49fe7e18-6037-4431-85bb-b21e26cc632e-host\") pod \"crc-debug-pf9d6\" (UID: \"49fe7e18-6037-4431-85bb-b21e26cc632e\") " pod="openshift-must-gather-dbrmg/crc-debug-pf9d6" Dec 16 13:17:41 crc kubenswrapper[4805]: I1216 13:17:41.346980 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjnbv\" (UniqueName: \"kubernetes.io/projected/49fe7e18-6037-4431-85bb-b21e26cc632e-kube-api-access-fjnbv\") pod \"crc-debug-pf9d6\" (UID: \"49fe7e18-6037-4431-85bb-b21e26cc632e\") " pod="openshift-must-gather-dbrmg/crc-debug-pf9d6" Dec 16 13:17:41 crc kubenswrapper[4805]: I1216 13:17:41.347059 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49fe7e18-6037-4431-85bb-b21e26cc632e-host\") pod \"crc-debug-pf9d6\" (UID: \"49fe7e18-6037-4431-85bb-b21e26cc632e\") " pod="openshift-must-gather-dbrmg/crc-debug-pf9d6" Dec 16 13:17:41 crc kubenswrapper[4805]: I1216 13:17:41.347276 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49fe7e18-6037-4431-85bb-b21e26cc632e-host\") pod \"crc-debug-pf9d6\" (UID: \"49fe7e18-6037-4431-85bb-b21e26cc632e\") " pod="openshift-must-gather-dbrmg/crc-debug-pf9d6" Dec 16 13:17:41 crc kubenswrapper[4805]: I1216 13:17:41.494244 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjnbv\" (UniqueName: \"kubernetes.io/projected/49fe7e18-6037-4431-85bb-b21e26cc632e-kube-api-access-fjnbv\") pod \"crc-debug-pf9d6\" (UID: \"49fe7e18-6037-4431-85bb-b21e26cc632e\") " pod="openshift-must-gather-dbrmg/crc-debug-pf9d6" Dec 16 13:17:41 crc kubenswrapper[4805]: I1216 13:17:41.711070 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dbrmg/crc-debug-pf9d6" Dec 16 13:17:42 crc kubenswrapper[4805]: I1216 13:17:42.456091 4805 generic.go:334] "Generic (PLEG): container finished" podID="49fe7e18-6037-4431-85bb-b21e26cc632e" containerID="dd054bcdce08e68832e4fd2aae1fde67328320e9e9e841773bf77ad3a4fb018d" exitCode=0 Dec 16 13:17:42 crc kubenswrapper[4805]: I1216 13:17:42.456180 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dbrmg/crc-debug-pf9d6" event={"ID":"49fe7e18-6037-4431-85bb-b21e26cc632e","Type":"ContainerDied","Data":"dd054bcdce08e68832e4fd2aae1fde67328320e9e9e841773bf77ad3a4fb018d"} Dec 16 13:17:42 crc kubenswrapper[4805]: I1216 13:17:42.456605 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dbrmg/crc-debug-pf9d6" event={"ID":"49fe7e18-6037-4431-85bb-b21e26cc632e","Type":"ContainerStarted","Data":"556425b212c2cfc85d28ef04d846580d61329ff9d4c8d261cfc15da207e79169"} Dec 16 13:17:43 crc kubenswrapper[4805]: I1216 13:17:43.569511 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dbrmg/crc-debug-pf9d6" Dec 16 13:17:43 crc kubenswrapper[4805]: I1216 13:17:43.752105 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjnbv\" (UniqueName: \"kubernetes.io/projected/49fe7e18-6037-4431-85bb-b21e26cc632e-kube-api-access-fjnbv\") pod \"49fe7e18-6037-4431-85bb-b21e26cc632e\" (UID: \"49fe7e18-6037-4431-85bb-b21e26cc632e\") " Dec 16 13:17:43 crc kubenswrapper[4805]: I1216 13:17:43.752210 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49fe7e18-6037-4431-85bb-b21e26cc632e-host\") pod \"49fe7e18-6037-4431-85bb-b21e26cc632e\" (UID: \"49fe7e18-6037-4431-85bb-b21e26cc632e\") " Dec 16 13:17:43 crc kubenswrapper[4805]: I1216 13:17:43.752572 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fe7e18-6037-4431-85bb-b21e26cc632e-host" (OuterVolumeSpecName: "host") pod "49fe7e18-6037-4431-85bb-b21e26cc632e" (UID: "49fe7e18-6037-4431-85bb-b21e26cc632e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 13:17:43 crc kubenswrapper[4805]: I1216 13:17:43.776076 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49fe7e18-6037-4431-85bb-b21e26cc632e-kube-api-access-fjnbv" (OuterVolumeSpecName: "kube-api-access-fjnbv") pod "49fe7e18-6037-4431-85bb-b21e26cc632e" (UID: "49fe7e18-6037-4431-85bb-b21e26cc632e"). InnerVolumeSpecName "kube-api-access-fjnbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:17:43 crc kubenswrapper[4805]: I1216 13:17:43.853909 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjnbv\" (UniqueName: \"kubernetes.io/projected/49fe7e18-6037-4431-85bb-b21e26cc632e-kube-api-access-fjnbv\") on node \"crc\" DevicePath \"\"" Dec 16 13:17:43 crc kubenswrapper[4805]: I1216 13:17:43.853946 4805 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49fe7e18-6037-4431-85bb-b21e26cc632e-host\") on node \"crc\" DevicePath \"\"" Dec 16 13:17:44 crc kubenswrapper[4805]: I1216 13:17:44.489022 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dbrmg/crc-debug-pf9d6" event={"ID":"49fe7e18-6037-4431-85bb-b21e26cc632e","Type":"ContainerDied","Data":"556425b212c2cfc85d28ef04d846580d61329ff9d4c8d261cfc15da207e79169"} Dec 16 13:17:44 crc kubenswrapper[4805]: I1216 13:17:44.489258 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="556425b212c2cfc85d28ef04d846580d61329ff9d4c8d261cfc15da207e79169" Dec 16 13:17:44 crc kubenswrapper[4805]: I1216 13:17:44.489065 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dbrmg/crc-debug-pf9d6" Dec 16 13:17:44 crc kubenswrapper[4805]: I1216 13:17:44.701074 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dbrmg/crc-debug-pf9d6"] Dec 16 13:17:44 crc kubenswrapper[4805]: I1216 13:17:44.715040 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dbrmg/crc-debug-pf9d6"] Dec 16 13:17:46 crc kubenswrapper[4805]: I1216 13:17:46.045199 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dbrmg/crc-debug-tmltb"] Dec 16 13:17:46 crc kubenswrapper[4805]: E1216 13:17:46.046021 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fe7e18-6037-4431-85bb-b21e26cc632e" containerName="container-00" Dec 16 13:17:46 crc kubenswrapper[4805]: I1216 13:17:46.046035 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fe7e18-6037-4431-85bb-b21e26cc632e" containerName="container-00" Dec 16 13:17:46 crc kubenswrapper[4805]: I1216 13:17:46.046303 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fe7e18-6037-4431-85bb-b21e26cc632e" containerName="container-00" Dec 16 13:17:46 crc kubenswrapper[4805]: I1216 13:17:46.049083 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dbrmg/crc-debug-tmltb" Dec 16 13:17:46 crc kubenswrapper[4805]: I1216 13:17:46.086651 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk4hs\" (UniqueName: \"kubernetes.io/projected/5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f-kube-api-access-bk4hs\") pod \"crc-debug-tmltb\" (UID: \"5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f\") " pod="openshift-must-gather-dbrmg/crc-debug-tmltb" Dec 16 13:17:46 crc kubenswrapper[4805]: I1216 13:17:46.086797 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f-host\") pod \"crc-debug-tmltb\" (UID: \"5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f\") " pod="openshift-must-gather-dbrmg/crc-debug-tmltb" Dec 16 13:17:46 crc kubenswrapper[4805]: I1216 13:17:46.188358 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f-host\") pod \"crc-debug-tmltb\" (UID: \"5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f\") " pod="openshift-must-gather-dbrmg/crc-debug-tmltb" Dec 16 13:17:46 crc kubenswrapper[4805]: I1216 13:17:46.188515 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk4hs\" (UniqueName: \"kubernetes.io/projected/5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f-kube-api-access-bk4hs\") pod \"crc-debug-tmltb\" (UID: \"5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f\") " pod="openshift-must-gather-dbrmg/crc-debug-tmltb" Dec 16 13:17:46 crc kubenswrapper[4805]: I1216 13:17:46.189004 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f-host\") pod \"crc-debug-tmltb\" (UID: \"5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f\") " pod="openshift-must-gather-dbrmg/crc-debug-tmltb" Dec 16 13:17:46 crc kubenswrapper[4805]: I1216 13:17:46.218271 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk4hs\" (UniqueName: \"kubernetes.io/projected/5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f-kube-api-access-bk4hs\") pod \"crc-debug-tmltb\" (UID: \"5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f\") " pod="openshift-must-gather-dbrmg/crc-debug-tmltb" Dec 16 13:17:46 crc kubenswrapper[4805]: I1216 13:17:46.375898 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dbrmg/crc-debug-tmltb" Dec 16 13:17:46 crc kubenswrapper[4805]: W1216 13:17:46.406388 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b4b1b4a_4955_49fc_9f7d_abc7ae2ead1f.slice/crio-a01562cf1149eaba607b9efa52802b3c62acd4f67d6cf2be04ec568be2327579 WatchSource:0}: Error finding container a01562cf1149eaba607b9efa52802b3c62acd4f67d6cf2be04ec568be2327579: Status 404 returned error can't find the container with id a01562cf1149eaba607b9efa52802b3c62acd4f67d6cf2be04ec568be2327579 Dec 16 13:17:46 crc kubenswrapper[4805]: I1216 13:17:46.539469 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fe7e18-6037-4431-85bb-b21e26cc632e" path="/var/lib/kubelet/pods/49fe7e18-6037-4431-85bb-b21e26cc632e/volumes" Dec 16 13:17:46 crc kubenswrapper[4805]: I1216 13:17:46.543211 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dbrmg/crc-debug-tmltb" event={"ID":"5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f","Type":"ContainerStarted","Data":"a01562cf1149eaba607b9efa52802b3c62acd4f67d6cf2be04ec568be2327579"} Dec 16 13:17:47 crc kubenswrapper[4805]: I1216 13:17:47.556354 4805 generic.go:334] "Generic (PLEG): container finished" podID="5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f" containerID="acdbe89c318aec8e37e69f6e7cc23fa5a9830966dde9a325843eadadc9a8fd03" exitCode=0 Dec 16 13:17:47 crc kubenswrapper[4805]: I1216 13:17:47.556682 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dbrmg/crc-debug-tmltb" event={"ID":"5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f","Type":"ContainerDied","Data":"acdbe89c318aec8e37e69f6e7cc23fa5a9830966dde9a325843eadadc9a8fd03"} Dec 16 13:17:47 crc kubenswrapper[4805]: I1216 13:17:47.595622 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dbrmg/crc-debug-tmltb"] Dec 16 13:17:47 crc kubenswrapper[4805]: I1216 13:17:47.622479 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dbrmg/crc-debug-tmltb"] Dec 16 13:17:48 crc kubenswrapper[4805]: I1216 13:17:48.682445 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dbrmg/crc-debug-tmltb" Dec 16 13:17:48 crc kubenswrapper[4805]: I1216 13:17:48.848440 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f-host\") pod \"5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f\" (UID: \"5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f\") " Dec 16 13:17:48 crc kubenswrapper[4805]: I1216 13:17:48.848566 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f-host" (OuterVolumeSpecName: "host") pod "5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f" (UID: "5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 13:17:48 crc kubenswrapper[4805]: I1216 13:17:48.848703 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk4hs\" (UniqueName: \"kubernetes.io/projected/5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f-kube-api-access-bk4hs\") pod \"5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f\" (UID: \"5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f\") " Dec 16 13:17:48 crc kubenswrapper[4805]: I1216 13:17:48.849212 4805 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f-host\") on node \"crc\" DevicePath \"\"" Dec 16 13:17:48 crc kubenswrapper[4805]: I1216 13:17:48.860450 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f-kube-api-access-bk4hs" (OuterVolumeSpecName: "kube-api-access-bk4hs") pod "5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f" (UID: "5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f"). InnerVolumeSpecName "kube-api-access-bk4hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:17:48 crc kubenswrapper[4805]: I1216 13:17:48.950549 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk4hs\" (UniqueName: \"kubernetes.io/projected/5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f-kube-api-access-bk4hs\") on node \"crc\" DevicePath \"\"" Dec 16 13:17:49 crc kubenswrapper[4805]: I1216 13:17:49.574283 4805 scope.go:117] "RemoveContainer" containerID="acdbe89c318aec8e37e69f6e7cc23fa5a9830966dde9a325843eadadc9a8fd03" Dec 16 13:17:49 crc kubenswrapper[4805]: I1216 13:17:49.574343 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dbrmg/crc-debug-tmltb" Dec 16 13:17:50 crc kubenswrapper[4805]: I1216 13:17:50.535263 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f" path="/var/lib/kubelet/pods/5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f/volumes" Dec 16 13:18:01 crc kubenswrapper[4805]: I1216 13:18:01.349037 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mr4s7"] Dec 16 13:18:01 crc kubenswrapper[4805]: E1216 13:18:01.350034 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f" containerName="container-00" Dec 16 13:18:01 crc kubenswrapper[4805]: I1216 13:18:01.350051 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f" containerName="container-00" Dec 16 13:18:01 crc kubenswrapper[4805]: I1216 13:18:01.350322 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b4b1b4a-4955-49fc-9f7d-abc7ae2ead1f" containerName="container-00" Dec 16 13:18:01 crc kubenswrapper[4805]: I1216 13:18:01.352171 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:01 crc kubenswrapper[4805]: I1216 13:18:01.372835 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mr4s7"] Dec 16 13:18:01 crc kubenswrapper[4805]: I1216 13:18:01.399385 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5003a38e-6925-4bb0-8ff0-d9f35609e352-catalog-content\") pod \"community-operators-mr4s7\" (UID: \"5003a38e-6925-4bb0-8ff0-d9f35609e352\") " pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:01 crc kubenswrapper[4805]: I1216 13:18:01.399483 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5003a38e-6925-4bb0-8ff0-d9f35609e352-utilities\") pod \"community-operators-mr4s7\" (UID: \"5003a38e-6925-4bb0-8ff0-d9f35609e352\") " pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:01 crc kubenswrapper[4805]: I1216 13:18:01.399509 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj69f\" (UniqueName: \"kubernetes.io/projected/5003a38e-6925-4bb0-8ff0-d9f35609e352-kube-api-access-dj69f\") pod \"community-operators-mr4s7\" (UID: \"5003a38e-6925-4bb0-8ff0-d9f35609e352\") " pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:01 crc kubenswrapper[4805]: I1216 13:18:01.501481 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj69f\" (UniqueName: \"kubernetes.io/projected/5003a38e-6925-4bb0-8ff0-d9f35609e352-kube-api-access-dj69f\") pod \"community-operators-mr4s7\" (UID: \"5003a38e-6925-4bb0-8ff0-d9f35609e352\") " pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:01 crc kubenswrapper[4805]: I1216 13:18:01.501678 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5003a38e-6925-4bb0-8ff0-d9f35609e352-catalog-content\") pod \"community-operators-mr4s7\" (UID: \"5003a38e-6925-4bb0-8ff0-d9f35609e352\") " pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:01 crc kubenswrapper[4805]: I1216 13:18:01.501783 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5003a38e-6925-4bb0-8ff0-d9f35609e352-utilities\") pod \"community-operators-mr4s7\" (UID: \"5003a38e-6925-4bb0-8ff0-d9f35609e352\") " pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:01 crc kubenswrapper[4805]: I1216 13:18:01.502355 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5003a38e-6925-4bb0-8ff0-d9f35609e352-catalog-content\") pod \"community-operators-mr4s7\" (UID: \"5003a38e-6925-4bb0-8ff0-d9f35609e352\") " pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:01 crc kubenswrapper[4805]: I1216 13:18:01.502439 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5003a38e-6925-4bb0-8ff0-d9f35609e352-utilities\") pod \"community-operators-mr4s7\" (UID: \"5003a38e-6925-4bb0-8ff0-d9f35609e352\") " pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:01 crc kubenswrapper[4805]: I1216 13:18:01.523082 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dj69f\" (UniqueName: \"kubernetes.io/projected/5003a38e-6925-4bb0-8ff0-d9f35609e352-kube-api-access-dj69f\") pod \"community-operators-mr4s7\" (UID: \"5003a38e-6925-4bb0-8ff0-d9f35609e352\") " pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:01 crc kubenswrapper[4805]: I1216 13:18:01.680668 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:02 crc kubenswrapper[4805]: I1216 13:18:02.406223 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mr4s7"] Dec 16 13:18:02 crc kubenswrapper[4805]: I1216 13:18:02.686410 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr4s7" event={"ID":"5003a38e-6925-4bb0-8ff0-d9f35609e352","Type":"ContainerStarted","Data":"e0548ad466b35492efce49de4307a04e9170fc1929309234fa447d6783abe13d"} Dec 16 13:18:02 crc kubenswrapper[4805]: I1216 13:18:02.686642 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr4s7" event={"ID":"5003a38e-6925-4bb0-8ff0-d9f35609e352","Type":"ContainerStarted","Data":"2e5fd1c4050978a5450fa571dec681d57a0a095bbcb6e2cb90b1b25af478e5a1"} Dec 16 13:18:03 crc kubenswrapper[4805]: I1216 13:18:03.699577 4805 generic.go:334] "Generic (PLEG): container finished" podID="5003a38e-6925-4bb0-8ff0-d9f35609e352" containerID="e0548ad466b35492efce49de4307a04e9170fc1929309234fa447d6783abe13d" exitCode=0 Dec 16 13:18:03 crc kubenswrapper[4805]: I1216 13:18:03.699798 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr4s7" event={"ID":"5003a38e-6925-4bb0-8ff0-d9f35609e352","Type":"ContainerDied","Data":"e0548ad466b35492efce49de4307a04e9170fc1929309234fa447d6783abe13d"} Dec 16 13:18:05 crc kubenswrapper[4805]: I1216 13:18:05.722986 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr4s7" event={"ID":"5003a38e-6925-4bb0-8ff0-d9f35609e352","Type":"ContainerStarted","Data":"6f454cf7c1f6d11638f1ab2d66990bb14cf28fb00be0be8c5d8b87fb9828a78a"} Dec 16 13:18:06 crc kubenswrapper[4805]: I1216 13:18:06.731938 4805 generic.go:334] "Generic (PLEG): container finished" podID="5003a38e-6925-4bb0-8ff0-d9f35609e352" containerID="6f454cf7c1f6d11638f1ab2d66990bb14cf28fb00be0be8c5d8b87fb9828a78a" exitCode=0 Dec 16 13:18:06 crc kubenswrapper[4805]: I1216 13:18:06.732002 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr4s7" event={"ID":"5003a38e-6925-4bb0-8ff0-d9f35609e352","Type":"ContainerDied","Data":"6f454cf7c1f6d11638f1ab2d66990bb14cf28fb00be0be8c5d8b87fb9828a78a"} Dec 16 13:18:07 crc kubenswrapper[4805]: I1216 13:18:07.764587 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr4s7" event={"ID":"5003a38e-6925-4bb0-8ff0-d9f35609e352","Type":"ContainerStarted","Data":"9c715308d04af0a6409fb5ec734ba0964da77e7cd0d9f035a46a4a4f73dbe5b3"} Dec 16 13:18:07 crc kubenswrapper[4805]: I1216 13:18:07.804670 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mr4s7" podStartSLOduration=3.360728625 podStartE2EDuration="6.804642737s" podCreationTimestamp="2025-12-16 13:18:01 +0000 UTC" firstStartedPulling="2025-12-16 13:18:03.70369356 +0000 UTC m=+4957.421951365" lastFinishedPulling="2025-12-16 
13:18:07.147607682 +0000 UTC m=+4960.865865477" observedRunningTime="2025-12-16 13:18:07.79427919 +0000 UTC m=+4961.512536995" watchObservedRunningTime="2025-12-16 13:18:07.804642737 +0000 UTC m=+4961.522900552" Dec 16 13:18:11 crc kubenswrapper[4805]: I1216 13:18:11.021106 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ctwkd"] Dec 16 13:18:11 crc kubenswrapper[4805]: I1216 13:18:11.024191 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:11 crc kubenswrapper[4805]: I1216 13:18:11.038033 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctwkd"] Dec 16 13:18:11 crc kubenswrapper[4805]: I1216 13:18:11.242706 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0caee268-c8ae-430c-ab76-f2da8ef2260a-utilities\") pod \"redhat-marketplace-ctwkd\" (UID: \"0caee268-c8ae-430c-ab76-f2da8ef2260a\") " pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:11 crc kubenswrapper[4805]: I1216 13:18:11.244599 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqzbv\" (UniqueName: \"kubernetes.io/projected/0caee268-c8ae-430c-ab76-f2da8ef2260a-kube-api-access-xqzbv\") pod \"redhat-marketplace-ctwkd\" (UID: \"0caee268-c8ae-430c-ab76-f2da8ef2260a\") " pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:11 crc kubenswrapper[4805]: I1216 13:18:11.244816 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0caee268-c8ae-430c-ab76-f2da8ef2260a-catalog-content\") pod \"redhat-marketplace-ctwkd\" (UID: \"0caee268-c8ae-430c-ab76-f2da8ef2260a\") " pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:11 crc kubenswrapper[4805]: I1216 13:18:11.346299 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqzbv\" (UniqueName: \"kubernetes.io/projected/0caee268-c8ae-430c-ab76-f2da8ef2260a-kube-api-access-xqzbv\") pod \"redhat-marketplace-ctwkd\" (UID: \"0caee268-c8ae-430c-ab76-f2da8ef2260a\") " pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:11 crc kubenswrapper[4805]: I1216 13:18:11.346344 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0caee268-c8ae-430c-ab76-f2da8ef2260a-utilities\") pod \"redhat-marketplace-ctwkd\" (UID: \"0caee268-c8ae-430c-ab76-f2da8ef2260a\") " pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:11 crc kubenswrapper[4805]: I1216 13:18:11.346422 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0caee268-c8ae-430c-ab76-f2da8ef2260a-catalog-content\") pod \"redhat-marketplace-ctwkd\" (UID: \"0caee268-c8ae-430c-ab76-f2da8ef2260a\") " pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:11 crc kubenswrapper[4805]: I1216 13:18:11.347107 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0caee268-c8ae-430c-ab76-f2da8ef2260a-catalog-content\") pod \"redhat-marketplace-ctwkd\" (UID: \"0caee268-c8ae-430c-ab76-f2da8ef2260a\") " 
pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:11 crc kubenswrapper[4805]: I1216 13:18:11.347949 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0caee268-c8ae-430c-ab76-f2da8ef2260a-utilities\") pod \"redhat-marketplace-ctwkd\" (UID: \"0caee268-c8ae-430c-ab76-f2da8ef2260a\") " pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:11 crc kubenswrapper[4805]: I1216 13:18:11.681574 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:11 crc kubenswrapper[4805]: I1216 13:18:11.681637 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:16 crc kubenswrapper[4805]: I1216 13:18:16.598228 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqzbv\" (UniqueName: \"kubernetes.io/projected/0caee268-c8ae-430c-ab76-f2da8ef2260a-kube-api-access-xqzbv\") pod \"redhat-marketplace-ctwkd\" (UID: \"0caee268-c8ae-430c-ab76-f2da8ef2260a\") " pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:16 crc kubenswrapper[4805]: I1216 13:18:16.702122 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:16 crc kubenswrapper[4805]: I1216 13:18:16.743906 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:16 crc kubenswrapper[4805]: I1216 13:18:16.814719 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:16 crc kubenswrapper[4805]: I1216 13:18:16.956629 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mr4s7"] Dec 16 13:18:17 crc kubenswrapper[4805]: I1216 13:18:17.525381 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctwkd"] Dec 16 13:18:17 crc kubenswrapper[4805]: I1216 13:18:17.870747 4805 generic.go:334] "Generic (PLEG): container finished" podID="0caee268-c8ae-430c-ab76-f2da8ef2260a" containerID="97bd41370d1c95e1cf360450b0b66e28a0832f552b73ffac4943bb53f5154331" exitCode=0 Dec 16 13:18:17 crc kubenswrapper[4805]: I1216 13:18:17.870830 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctwkd" event={"ID":"0caee268-c8ae-430c-ab76-f2da8ef2260a","Type":"ContainerDied","Data":"97bd41370d1c95e1cf360450b0b66e28a0832f552b73ffac4943bb53f5154331"} Dec 16 13:18:17 crc kubenswrapper[4805]: I1216 13:18:17.871198 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctwkd" event={"ID":"0caee268-c8ae-430c-ab76-f2da8ef2260a","Type":"ContainerStarted","Data":"5647ca06cd1c1fd156ac28467bc9dfc7bc2d7a163fe819bdeb7bced656ffad2f"} Dec 16 13:18:17 crc kubenswrapper[4805]: I1216 13:18:17.871319 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mr4s7" podUID="5003a38e-6925-4bb0-8ff0-d9f35609e352" containerName="registry-server" containerID="cri-o://9c715308d04af0a6409fb5ec734ba0964da77e7cd0d9f035a46a4a4f73dbe5b3" gracePeriod=2 Dec 16 13:18:18 crc kubenswrapper[4805]: I1216 13:18:18.893619 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="5003a38e-6925-4bb0-8ff0-d9f35609e352" containerID="9c715308d04af0a6409fb5ec734ba0964da77e7cd0d9f035a46a4a4f73dbe5b3" exitCode=0 Dec 16 13:18:18 crc kubenswrapper[4805]: I1216 13:18:18.893711 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr4s7" event={"ID":"5003a38e-6925-4bb0-8ff0-d9f35609e352","Type":"ContainerDied","Data":"9c715308d04af0a6409fb5ec734ba0964da77e7cd0d9f035a46a4a4f73dbe5b3"} Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.000221 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.095452 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5003a38e-6925-4bb0-8ff0-d9f35609e352-catalog-content\") pod \"5003a38e-6925-4bb0-8ff0-d9f35609e352\" (UID: \"5003a38e-6925-4bb0-8ff0-d9f35609e352\") " Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.095728 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj69f\" (UniqueName: \"kubernetes.io/projected/5003a38e-6925-4bb0-8ff0-d9f35609e352-kube-api-access-dj69f\") pod \"5003a38e-6925-4bb0-8ff0-d9f35609e352\" (UID: \"5003a38e-6925-4bb0-8ff0-d9f35609e352\") " Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.096226 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5003a38e-6925-4bb0-8ff0-d9f35609e352-utilities\") pod \"5003a38e-6925-4bb0-8ff0-d9f35609e352\" (UID: \"5003a38e-6925-4bb0-8ff0-d9f35609e352\") " Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.097620 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5003a38e-6925-4bb0-8ff0-d9f35609e352-utilities" (OuterVolumeSpecName: "utilities") pod "5003a38e-6925-4bb0-8ff0-d9f35609e352" (UID: "5003a38e-6925-4bb0-8ff0-d9f35609e352"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.098691 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5003a38e-6925-4bb0-8ff0-d9f35609e352-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.108534 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5003a38e-6925-4bb0-8ff0-d9f35609e352-kube-api-access-dj69f" (OuterVolumeSpecName: "kube-api-access-dj69f") pod "5003a38e-6925-4bb0-8ff0-d9f35609e352" (UID: "5003a38e-6925-4bb0-8ff0-d9f35609e352"). InnerVolumeSpecName "kube-api-access-dj69f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.153987 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5003a38e-6925-4bb0-8ff0-d9f35609e352-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5003a38e-6925-4bb0-8ff0-d9f35609e352" (UID: "5003a38e-6925-4bb0-8ff0-d9f35609e352"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.200248 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj69f\" (UniqueName: \"kubernetes.io/projected/5003a38e-6925-4bb0-8ff0-d9f35609e352-kube-api-access-dj69f\") on node \"crc\" DevicePath \"\"" Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.200289 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5003a38e-6925-4bb0-8ff0-d9f35609e352-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.930584 4805 generic.go:334] "Generic (PLEG): container finished" podID="0caee268-c8ae-430c-ab76-f2da8ef2260a" containerID="46d4fd39a8dd31ad692d010bd61fe3bb8c3f55291a5e3c5d02fdf8f933605f8e" exitCode=0 Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.930942 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctwkd" event={"ID":"0caee268-c8ae-430c-ab76-f2da8ef2260a","Type":"ContainerDied","Data":"46d4fd39a8dd31ad692d010bd61fe3bb8c3f55291a5e3c5d02fdf8f933605f8e"} Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.937801 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr4s7" event={"ID":"5003a38e-6925-4bb0-8ff0-d9f35609e352","Type":"ContainerDied","Data":"2e5fd1c4050978a5450fa571dec681d57a0a095bbcb6e2cb90b1b25af478e5a1"} Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.937856 4805 scope.go:117] "RemoveContainer" containerID="9c715308d04af0a6409fb5ec734ba0964da77e7cd0d9f035a46a4a4f73dbe5b3" Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.938119 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mr4s7" Dec 16 13:18:19 crc kubenswrapper[4805]: I1216 13:18:19.989888 4805 scope.go:117] "RemoveContainer" containerID="6f454cf7c1f6d11638f1ab2d66990bb14cf28fb00be0be8c5d8b87fb9828a78a" Dec 16 13:18:20 crc kubenswrapper[4805]: I1216 13:18:20.013719 4805 scope.go:117] "RemoveContainer" containerID="e0548ad466b35492efce49de4307a04e9170fc1929309234fa447d6783abe13d" Dec 16 13:18:20 crc kubenswrapper[4805]: I1216 13:18:20.056043 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mr4s7"] Dec 16 13:18:20 crc kubenswrapper[4805]: I1216 13:18:20.066092 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mr4s7"] Dec 16 13:18:20 crc kubenswrapper[4805]: I1216 13:18:20.533635 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5003a38e-6925-4bb0-8ff0-d9f35609e352" path="/var/lib/kubelet/pods/5003a38e-6925-4bb0-8ff0-d9f35609e352/volumes" Dec 16 13:18:20 crc kubenswrapper[4805]: I1216 13:18:20.684793 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c96b7bb8b-r7k4g_7d8752f0-5469-43e4-9284-8dd712bfd63f/barbican-api/0.log" Dec 16 13:18:20 crc kubenswrapper[4805]: I1216 13:18:20.864833 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c96b7bb8b-r7k4g_7d8752f0-5469-43e4-9284-8dd712bfd63f/barbican-api-log/0.log" Dec 16 13:18:20 crc kubenswrapper[4805]: I1216 13:18:20.979301 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-64dcddf9dd-sdrs9_beba05fa-4b9d-44f2-88f4-87611a38604b/barbican-keystone-listener/0.log" Dec 16 13:18:20 crc kubenswrapper[4805]: I1216 13:18:20.988497 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctwkd" event={"ID":"0caee268-c8ae-430c-ab76-f2da8ef2260a","Type":"ContainerStarted","Data":"e074cc5b7c73cbf4de01c7022fae2decf24b0efd7dc820a91b4d9add222091d0"} Dec 16 13:18:21 crc kubenswrapper[4805]: I1216 13:18:21.022614 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ctwkd" podStartSLOduration=7.459976789 podStartE2EDuration="10.022596551s" podCreationTimestamp="2025-12-16 13:18:11 +0000 UTC" firstStartedPulling="2025-12-16 13:18:17.874020788 +0000 UTC m=+4971.592278593" lastFinishedPulling="2025-12-16 13:18:20.43664055 +0000 UTC m=+4974.154898355" observedRunningTime="2025-12-16 13:18:21.016294671 +0000 UTC m=+4974.734552476" watchObservedRunningTime="2025-12-16 13:18:21.022596551 +0000 UTC m=+4974.740854366" Dec 16 13:18:21 crc kubenswrapper[4805]: I1216 13:18:21.227593 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-64dcddf9dd-sdrs9_beba05fa-4b9d-44f2-88f4-87611a38604b/barbican-keystone-listener-log/0.log" Dec 16 13:18:21 crc kubenswrapper[4805]: I1216 13:18:21.578221 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-c4448787f-5pqnk_c60ef5b9-ec24-43c3-ab83-7a6f10a972bc/barbican-worker/0.log" Dec 16 13:18:21 crc kubenswrapper[4805]: I1216 13:18:21.587905 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-c4448787f-5pqnk_c60ef5b9-ec24-43c3-ab83-7a6f10a972bc/barbican-worker-log/0.log" Dec 16 13:18:22 crc kubenswrapper[4805]: I1216 13:18:22.139862 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb_d0396b0a-2aae-4507-a31e-cbd5f936f3eb/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:18:22 crc kubenswrapper[4805]: I1216 13:18:22.512028 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_129d76c2-c0d0-4b1f-8157-ecc2abae65be/ceilometer-central-agent/0.log" Dec 16 13:18:22 crc kubenswrapper[4805]: I1216 13:18:22.559694 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_129d76c2-c0d0-4b1f-8157-ecc2abae65be/ceilometer-notification-agent/0.log" Dec 16 13:18:22 crc kubenswrapper[4805]: I1216 13:18:22.656441 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_129d76c2-c0d0-4b1f-8157-ecc2abae65be/proxy-httpd/0.log" Dec 16 13:18:22 crc kubenswrapper[4805]: I1216 13:18:22.709388 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_129d76c2-c0d0-4b1f-8157-ecc2abae65be/sg-core/0.log" Dec 16 13:18:22 crc kubenswrapper[4805]: I1216 13:18:22.962397 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bdfe5fb7-a1ea-4cd6-887d-46b30ece2329/cinder-api/0.log" Dec 16 13:18:23 crc kubenswrapper[4805]: I1216 13:18:23.039827 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bdfe5fb7-a1ea-4cd6-887d-46b30ece2329/cinder-api-log/0.log" Dec 16 13:18:23 crc kubenswrapper[4805]: I1216 13:18:23.261942 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_67f02ee6-d5c4-4010-983e-c4ee5e24a2c0/cinder-scheduler/0.log" Dec 16 13:18:23 crc kubenswrapper[4805]: I1216 13:18:23.416475 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_67f02ee6-d5c4-4010-983e-c4ee5e24a2c0/probe/0.log" Dec 16 13:18:23 crc kubenswrapper[4805]: I1216 13:18:23.608257 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7_9b007dae-6dbd-429b-85a3-a2087c098b68/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:18:23 crc kubenswrapper[4805]: I1216 13:18:23.810795 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk_9cb116b7-43db-435a-b4b1-59447b57c611/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:18:23 crc kubenswrapper[4805]: I1216 13:18:23.937066 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-79dc84bdb7-mnp8m_d507896c-ad5d-4fd8-9df2-22feaa838e8f/init/0.log" Dec 16 13:18:24 crc kubenswrapper[4805]: I1216 13:18:24.222716 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-79dc84bdb7-mnp8m_d507896c-ad5d-4fd8-9df2-22feaa838e8f/init/0.log" Dec 16 13:18:24 crc kubenswrapper[4805]: I1216 13:18:24.342581 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-79dc84bdb7-mnp8m_d507896c-ad5d-4fd8-9df2-22feaa838e8f/dnsmasq-dns/0.log" Dec 16 13:18:24 crc kubenswrapper[4805]: I1216 13:18:24.360546 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9_d062ee72-cb69-4cdc-93fe-1474435c0904/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:18:24 crc kubenswrapper[4805]: I1216 13:18:24.650340 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_645792d6-df6f-4e8c-a3cb-0b150ff5cd37/glance-httpd/0.log" Dec 16 13:18:24 crc kubenswrapper[4805]: I1216 13:18:24.772818 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_645792d6-df6f-4e8c-a3cb-0b150ff5cd37/glance-log/0.log" Dec 16 13:18:24 crc kubenswrapper[4805]: I1216 13:18:24.905810 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b7ccad5e-bb55-4439-964a-2830bacf95e2/glance-httpd/0.log" Dec 16 13:18:24 crc kubenswrapper[4805]: I1216 13:18:24.937917 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b7ccad5e-bb55-4439-964a-2830bacf95e2/glance-log/0.log" Dec 16 13:18:25 crc kubenswrapper[4805]: I1216 13:18:25.166668 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69799999fb-rbm4h_d4af4b9e-77b2-4f27-8148-7000d60f2266/horizon/0.log" Dec 16 13:18:25 crc kubenswrapper[4805]: I1216 13:18:25.715713 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69799999fb-rbm4h_d4af4b9e-77b2-4f27-8148-7000d60f2266/horizon-log/0.log" Dec 16 13:18:25 crc kubenswrapper[4805]: I1216 13:18:25.881406 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2_cb170dbb-2c7a-417a-8254-849165c08ef4/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:18:26 crc kubenswrapper[4805]: I1216 13:18:26.092396 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-brghs_c3d927b4-1bc7-4093-ad62-19dd87d9888a/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:18:26 crc kubenswrapper[4805]: I1216 13:18:26.273292 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29431501-jlgq8_aad6bcdb-7a23-48fa-b79c-69932357cf9f/keystone-cron/0.log" Dec 16 13:18:26 crc kubenswrapper[4805]: I1216 13:18:26.530778 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e3d98762-b270-4f16-8dce-26f0662152ad/kube-state-metrics/0.log" Dec 16 13:18:26 crc kubenswrapper[4805]: I1216 13:18:26.725045 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4ft55_8b5953ad-0a78-4483-9097-2d4de5ad084e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:18:26 crc kubenswrapper[4805]: I1216 13:18:26.745888 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:26 crc kubenswrapper[4805]: I1216 13:18:26.745931 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:26 crc kubenswrapper[4805]: I1216 13:18:26.750855 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7ff49bdd98-ng49z_234428eb-9306-44b8-baf0-e4c6c0772699/keystone-api/0.log" Dec 16 13:18:26 crc kubenswrapper[4805]: I1216 13:18:26.802902 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:27 crc kubenswrapper[4805]: I1216 13:18:27.029340 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_76cc6f3a-504f-4096-8c08-efbcb51ad101/memcached/0.log" Dec 16 13:18:27 crc kubenswrapper[4805]: I1216 
13:18:27.071182 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 13:18:27 crc kubenswrapper[4805]: I1216 13:18:27.071240 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 13:18:27 crc kubenswrapper[4805]: I1216 13:18:27.435680 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:27 crc kubenswrapper[4805]: I1216 13:18:27.496630 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctwkd"] Dec 16 13:18:27 crc kubenswrapper[4805]: I1216 13:18:27.509072 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-57fbfd7dcc-lq2v9_26f9167a-aa3e-4381-aac0-e0aaea7449a8/neutron-httpd/0.log" Dec 16 13:18:27 crc kubenswrapper[4805]: I1216 13:18:27.708010 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-57fbfd7dcc-lq2v9_26f9167a-aa3e-4381-aac0-e0aaea7449a8/neutron-api/0.log" Dec 16 13:18:27 crc kubenswrapper[4805]: I1216 13:18:27.914041 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g_f673c96e-f755-49d0-90ed-46ac92e151c2/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:18:28 crc kubenswrapper[4805]: I1216 13:18:28.648863 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7be1f6a3-3b42-4981-86ff-8851e931cd97/nova-cell0-conductor-conductor/0.log" Dec 16 13:18:28 crc kubenswrapper[4805]: I1216 13:18:28.762289 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a1c8741c-c8da-42f0-9ef8-9e419b58dcf4/nova-cell1-conductor-conductor/0.log" Dec 16 13:18:29 crc kubenswrapper[4805]: I1216 13:18:29.042851 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_67131b33-530e-49eb-9e82-cfbe1a05a5f9/nova-cell1-novncproxy-novncproxy/0.log" Dec 16 13:18:29 crc kubenswrapper[4805]: I1216 13:18:29.059256 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ctwkd" podUID="0caee268-c8ae-430c-ab76-f2da8ef2260a" containerName="registry-server" containerID="cri-o://e074cc5b7c73cbf4de01c7022fae2decf24b0efd7dc820a91b4d9add222091d0" gracePeriod=2 Dec 16 13:18:29 crc kubenswrapper[4805]: I1216 13:18:29.218111 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_31340444-d4f0-468b-9acb-ca27b87165a9/nova-api-log/0.log" Dec 16 13:18:29 crc kubenswrapper[4805]: I1216 13:18:29.260677 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_31340444-d4f0-468b-9acb-ca27b87165a9/nova-api-api/0.log" Dec 16 13:18:29 crc kubenswrapper[4805]: I1216 13:18:29.659039 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-mwk4q_128a7ec0-e80f-4147-a459-283405d9c838/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 
16 13:18:29 crc kubenswrapper[4805]: I1216 13:18:29.912749 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ba40942d-c8ca-45da-b36d-7e447dac985e/nova-metadata-log/0.log" Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.113175 4805 generic.go:334] "Generic (PLEG): container finished" podID="0caee268-c8ae-430c-ab76-f2da8ef2260a" containerID="e074cc5b7c73cbf4de01c7022fae2decf24b0efd7dc820a91b4d9add222091d0" exitCode=0 Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.113217 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctwkd" event={"ID":"0caee268-c8ae-430c-ab76-f2da8ef2260a","Type":"ContainerDied","Data":"e074cc5b7c73cbf4de01c7022fae2decf24b0efd7dc820a91b4d9add222091d0"} Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.191452 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctwkd" Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.216662 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0caee268-c8ae-430c-ab76-f2da8ef2260a-utilities\") pod \"0caee268-c8ae-430c-ab76-f2da8ef2260a\" (UID: \"0caee268-c8ae-430c-ab76-f2da8ef2260a\") " Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.216879 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqzbv\" (UniqueName: \"kubernetes.io/projected/0caee268-c8ae-430c-ab76-f2da8ef2260a-kube-api-access-xqzbv\") pod \"0caee268-c8ae-430c-ab76-f2da8ef2260a\" (UID: \"0caee268-c8ae-430c-ab76-f2da8ef2260a\") " Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.216922 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0caee268-c8ae-430c-ab76-f2da8ef2260a-catalog-content\") pod \"0caee268-c8ae-430c-ab76-f2da8ef2260a\" (UID: \"0caee268-c8ae-430c-ab76-f2da8ef2260a\") " Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.217956 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0caee268-c8ae-430c-ab76-f2da8ef2260a-utilities" (OuterVolumeSpecName: "utilities") pod "0caee268-c8ae-430c-ab76-f2da8ef2260a" (UID: "0caee268-c8ae-430c-ab76-f2da8ef2260a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.233575 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0caee268-c8ae-430c-ab76-f2da8ef2260a-kube-api-access-xqzbv" (OuterVolumeSpecName: "kube-api-access-xqzbv") pod "0caee268-c8ae-430c-ab76-f2da8ef2260a" (UID: "0caee268-c8ae-430c-ab76-f2da8ef2260a"). InnerVolumeSpecName "kube-api-access-xqzbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.302407 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0caee268-c8ae-430c-ab76-f2da8ef2260a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0caee268-c8ae-430c-ab76-f2da8ef2260a" (UID: "0caee268-c8ae-430c-ab76-f2da8ef2260a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.320194 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0caee268-c8ae-430c-ab76-f2da8ef2260a-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.320225 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqzbv\" (UniqueName: \"kubernetes.io/projected/0caee268-c8ae-430c-ab76-f2da8ef2260a-kube-api-access-xqzbv\") on node \"crc\" DevicePath \"\"" Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.320234 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0caee268-c8ae-430c-ab76-f2da8ef2260a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.484576 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2b009857-5d9e-4d6e-979c-d7fc3357bd66/mysql-bootstrap/0.log" Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.804869 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2b009857-5d9e-4d6e-979c-d7fc3357bd66/galera/0.log" Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.843543 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2b009857-5d9e-4d6e-979c-d7fc3357bd66/mysql-bootstrap/0.log" Dec 16 13:18:30 crc kubenswrapper[4805]: I1216 13:18:30.867348 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_9ff906a7-6277-4d2e-b804-4d8e006cab7d/nova-scheduler-scheduler/0.log" Dec 16 13:18:31 crc kubenswrapper[4805]: I1216 13:18:31.123619 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dd66837a-9e6f-41fd-91a0-f010e02a3a80/mysql-bootstrap/0.log" Dec 16 13:18:31 crc kubenswrapper[4805]: I1216 13:18:31.134681 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctwkd" event={"ID":"0caee268-c8ae-430c-ab76-f2da8ef2260a","Type":"ContainerDied","Data":"5647ca06cd1c1fd156ac28467bc9dfc7bc2d7a163fe819bdeb7bced656ffad2f"} Dec 16 13:18:31 crc kubenswrapper[4805]: I1216 13:18:31.134791 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctwkd"
Dec 16 13:18:31 crc kubenswrapper[4805]: I1216 13:18:31.134747 4805 scope.go:117] "RemoveContainer" containerID="e074cc5b7c73cbf4de01c7022fae2decf24b0efd7dc820a91b4d9add222091d0"
Dec 16 13:18:31 crc kubenswrapper[4805]: I1216 13:18:31.164852 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctwkd"]
Dec 16 13:18:31 crc kubenswrapper[4805]: I1216 13:18:31.170582 4805 scope.go:117] "RemoveContainer" containerID="46d4fd39a8dd31ad692d010bd61fe3bb8c3f55291a5e3c5d02fdf8f933605f8e"
Dec 16 13:18:31 crc kubenswrapper[4805]: I1216 13:18:31.186929 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctwkd"]
Dec 16 13:18:31 crc kubenswrapper[4805]: I1216 13:18:31.236452 4805 scope.go:117] "RemoveContainer" containerID="97bd41370d1c95e1cf360450b0b66e28a0832f552b73ffac4943bb53f5154331"
Dec 16 13:18:31 crc kubenswrapper[4805]: I1216 13:18:31.321172 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dd66837a-9e6f-41fd-91a0-f010e02a3a80/galera/0.log"
Dec 16 13:18:31 crc kubenswrapper[4805]: I1216 13:18:31.377579 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dd66837a-9e6f-41fd-91a0-f010e02a3a80/mysql-bootstrap/0.log"
Dec 16 13:18:31 crc kubenswrapper[4805]: I1216 13:18:31.450676 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ba40942d-c8ca-45da-b36d-7e447dac985e/nova-metadata-metadata/0.log"
Dec 16 13:18:31 crc kubenswrapper[4805]: I1216 13:18:31.552080 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_61335104-9325-4589-bcbb-fb19f4273dc2/openstackclient/0.log"
Dec 16 13:18:31 crc kubenswrapper[4805]: I1216 13:18:31.707568 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9xw22_f3e4543d-c48b-45a2-8eea-2584d5bba4b6/ovn-controller/0.log"
Dec 16 13:18:31 crc kubenswrapper[4805]: I1216 13:18:31.718599 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-j76qb_2d3d6086-fcc3-4ba0-af90-445bbcb3ff06/openstack-network-exporter/0.log"
Dec 16 13:18:31 crc kubenswrapper[4805]: I1216 13:18:31.874019 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ffmtv_b7c2d7ad-f96b-4eaa-b498-46e0739154f1/ovsdb-server-init/0.log"
Dec 16 13:18:32 crc kubenswrapper[4805]: I1216 13:18:32.134219 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ffmtv_b7c2d7ad-f96b-4eaa-b498-46e0739154f1/ovs-vswitchd/0.log"
Dec 16 13:18:32 crc kubenswrapper[4805]: I1216 13:18:32.163048 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ffmtv_b7c2d7ad-f96b-4eaa-b498-46e0739154f1/ovsdb-server-init/0.log"
Dec 16 13:18:32 crc kubenswrapper[4805]: I1216 13:18:32.187264 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ffmtv_b7c2d7ad-f96b-4eaa-b498-46e0739154f1/ovsdb-server/0.log"
Dec 16 13:18:32 crc kubenswrapper[4805]: I1216 13:18:32.248829 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-n2stl_6cf38957-b778-49fe-9dd0-c629e23fb773/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 16 13:18:32 crc kubenswrapper[4805]: I1216 13:18:32.402651 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_87517712-55f8-42c7-8a23-cb388090ed3c/openstack-network-exporter/0.log"
Dec 16 13:18:32 crc kubenswrapper[4805]: I1216 13:18:32.460840 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_87517712-55f8-42c7-8a23-cb388090ed3c/ovn-northd/0.log"
Dec 16 13:18:32 crc kubenswrapper[4805]: I1216 13:18:32.475975 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_39d101de-9ba7-46dc-830e-3c25397a64d2/openstack-network-exporter/0.log"
Dec 16 13:18:32 crc kubenswrapper[4805]: I1216 13:18:32.534514 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0caee268-c8ae-430c-ab76-f2da8ef2260a" path="/var/lib/kubelet/pods/0caee268-c8ae-430c-ab76-f2da8ef2260a/volumes"
Dec 16 13:18:32 crc kubenswrapper[4805]: I1216 13:18:32.619360 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_39d101de-9ba7-46dc-830e-3c25397a64d2/ovsdbserver-nb/0.log"
Dec 16 13:18:32 crc kubenswrapper[4805]: I1216 13:18:32.664613 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_dfbd99cf-6302-48c2-b119-1378b47d7c6d/openstack-network-exporter/0.log"
Dec 16 13:18:32 crc kubenswrapper[4805]: I1216 13:18:32.741896 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_dfbd99cf-6302-48c2-b119-1378b47d7c6d/ovsdbserver-sb/0.log"
Dec 16 13:18:32 crc kubenswrapper[4805]: I1216 13:18:32.984999 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_09870268-6496-4840-bd93-b9ae456cb54a/setup-container/0.log"
Dec 16 13:18:33 crc kubenswrapper[4805]: I1216 13:18:33.039939 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-775868cbc4-vvjnt_2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e/placement-api/0.log"
Dec 16 13:18:33 crc kubenswrapper[4805]: I1216 13:18:33.087784 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-775868cbc4-vvjnt_2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e/placement-log/0.log"
Dec 16 13:18:33 crc kubenswrapper[4805]: I1216 13:18:33.274736 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_09870268-6496-4840-bd93-b9ae456cb54a/rabbitmq/0.log"
Dec 16 13:18:33 crc kubenswrapper[4805]: I1216 13:18:33.326873 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_954917f7-4d5d-4dac-9621-f3c281539cf0/setup-container/0.log"
Dec 16 13:18:33 crc kubenswrapper[4805]: I1216 13:18:33.331434 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_09870268-6496-4840-bd93-b9ae456cb54a/setup-container/0.log"
Dec 16 13:18:33 crc kubenswrapper[4805]: I1216 13:18:33.590001 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_954917f7-4d5d-4dac-9621-f3c281539cf0/setup-container/0.log"
Dec 16 13:18:33 crc kubenswrapper[4805]: I1216 13:18:33.613106 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4_de881c10-2738-4b4a-9d44-8397ba3fc6b7/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 16 13:18:33 crc kubenswrapper[4805]: I1216 13:18:33.619028 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_954917f7-4d5d-4dac-9621-f3c281539cf0/rabbitmq/0.log"
Dec 16 13:18:33 crc kubenswrapper[4805]: I1216 13:18:33.851035 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-99pfm_40b28c68-1737-4ad9-a361-43581b880c4b/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 16 13:18:33 crc kubenswrapper[4805]: I1216 13:18:33.890116 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2_99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 16 13:18:33 crc kubenswrapper[4805]: I1216 13:18:33.994872 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-s5wgr_3c058268-e16d-417e-8375-014b2cd1d3a5/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 16 13:18:34 crc kubenswrapper[4805]: I1216 13:18:34.170452 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-jw42t_2af0c7ba-b7ca-40cd-9443-f9cb126211b0/ssh-known-hosts-edpm-deployment/0.log"
Dec 16 13:18:34 crc kubenswrapper[4805]: I1216 13:18:34.236180 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58c65944b9-fbmdw_c59b441a-f0f3-44c5-a0b8-42f00c60da72/proxy-server/0.log"
Dec 16 13:18:34 crc kubenswrapper[4805]: I1216 13:18:34.342252 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58c65944b9-fbmdw_c59b441a-f0f3-44c5-a0b8-42f00c60da72/proxy-httpd/0.log"
Dec 16 13:18:34 crc kubenswrapper[4805]: I1216 13:18:34.427231 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-xsqmm_09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5/swift-ring-rebalance/0.log"
Dec 16 13:18:34 crc kubenswrapper[4805]: I1216 13:18:34.555050 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/account-auditor/0.log"
Dec 16 13:18:34 crc kubenswrapper[4805]: I1216 13:18:34.635622 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/account-reaper/0.log"
Dec 16 13:18:34 crc kubenswrapper[4805]: I1216 13:18:34.705116 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/account-replicator/0.log"
Dec 16 13:18:34 crc kubenswrapper[4805]: I1216 13:18:34.734150 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/account-server/0.log"
Dec 16 13:18:34 crc kubenswrapper[4805]: I1216 13:18:34.844155 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/container-replicator/0.log"
Dec 16 13:18:34 crc kubenswrapper[4805]: I1216 13:18:34.850295 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/container-auditor/0.log"
Dec 16 13:18:34 crc kubenswrapper[4805]: I1216 13:18:34.875205 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/container-server/0.log"
Dec 16 13:18:34 crc kubenswrapper[4805]: I1216 13:18:34.927507 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/container-updater/0.log"
Dec 16 13:18:34 crc kubenswrapper[4805]: I1216 13:18:34.993582 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/object-auditor/0.log"
Dec 16 13:18:35 crc kubenswrapper[4805]: I1216 13:18:35.052020 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/object-expirer/0.log"
Dec 16 13:18:35 crc kubenswrapper[4805]: I1216 13:18:35.112252 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/object-replicator/0.log"
Dec 16 13:18:35 crc kubenswrapper[4805]: I1216 13:18:35.128276 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/object-server/0.log"
Dec 16 13:18:35 crc kubenswrapper[4805]: I1216 13:18:35.179403 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/object-updater/0.log"
Dec 16 13:18:35 crc kubenswrapper[4805]: I1216 13:18:35.253673 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/rsync/0.log"
Dec 16 13:18:35 crc kubenswrapper[4805]: I1216 13:18:35.292003 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/swift-recon-cron/0.log"
Dec 16 13:18:35 crc kubenswrapper[4805]: I1216 13:18:35.399193 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6_e4b7d191-e86d-4386-935b-e3ce28794d6d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 16 13:18:35 crc kubenswrapper[4805]: I1216 13:18:35.520715 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_96a2c3a4-408a-4437-9a22-bc7c41f87222/tempest-tests-tempest-tests-runner/0.log"
Dec 16 13:18:35 crc kubenswrapper[4805]: I1216 13:18:35.703815 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f_87d9250a-d08e-4cfe-9619-d48ebdb2753c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 16 13:18:35 crc kubenswrapper[4805]: I1216 13:18:35.713310 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_5858c1f1-d24f-4e97-85a1-b84b85c6a0ce/test-operator-logs-container/0.log"
Dec 16 13:18:57 crc kubenswrapper[4805]: I1216 13:18:57.071811 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 16 13:18:57 crc kubenswrapper[4805]: I1216 13:18:57.072475 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 16 13:19:04 crc kubenswrapper[4805]: I1216 13:19:04.716811 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq_8da86235-63b0-46ec-bca8-b68c248b2daa/util/0.log"
Dec 16 13:19:04 crc kubenswrapper[4805]: I1216 13:19:04.969470 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq_8da86235-63b0-46ec-bca8-b68c248b2daa/pull/0.log"
Dec 16 13:19:05 crc kubenswrapper[4805]: I1216 13:19:05.023744 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq_8da86235-63b0-46ec-bca8-b68c248b2daa/util/0.log"
Dec 16 13:19:05 crc kubenswrapper[4805]: I1216 13:19:05.050002 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq_8da86235-63b0-46ec-bca8-b68c248b2daa/pull/0.log"
Dec 16 13:19:05 crc kubenswrapper[4805]: I1216 13:19:05.195796 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq_8da86235-63b0-46ec-bca8-b68c248b2daa/util/0.log"
Dec 16 13:19:05 crc kubenswrapper[4805]: I1216 13:19:05.228321 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq_8da86235-63b0-46ec-bca8-b68c248b2daa/extract/0.log"
Dec 16 13:19:05 crc kubenswrapper[4805]: I1216 13:19:05.251440 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq_8da86235-63b0-46ec-bca8-b68c248b2daa/pull/0.log"
Dec 16 13:19:05 crc kubenswrapper[4805]: I1216 13:19:05.404688 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-bb565c8dd-5gtrz_81b6ebe6-984f-4ecc-9d75-ac78097f7af2/kube-rbac-proxy/0.log"
Dec 16 13:19:05 crc kubenswrapper[4805]: I1216 13:19:05.486932 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-bb565c8dd-5gtrz_81b6ebe6-984f-4ecc-9d75-ac78097f7af2/manager/0.log"
Dec 16 13:19:05 crc kubenswrapper[4805]: I1216 13:19:05.533673 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-669b58f65-782cb_f31accc0-70b7-4014-ac71-679dc729ed80/kube-rbac-proxy/0.log"
Dec 16 13:19:05 crc kubenswrapper[4805]: I1216 13:19:05.644170 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-669b58f65-782cb_f31accc0-70b7-4014-ac71-679dc729ed80/manager/0.log"
Dec 16 13:19:06 crc kubenswrapper[4805]: I1216 13:19:06.209062 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-69977bdf55-t27s6_4a51f724-a3be-4ef5-acd6-84891873147b/kube-rbac-proxy/0.log"
Dec 16 13:19:06 crc kubenswrapper[4805]: I1216 13:19:06.254190 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-69977bdf55-t27s6_4a51f724-a3be-4ef5-acd6-84891873147b/manager/0.log"
Dec 16 13:19:06 crc kubenswrapper[4805]: I1216 13:19:06.440303 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5847f67c56-pp6mh_b738fc79-2b52-4759-a3ee-72e0946df392/kube-rbac-proxy/0.log"
Dec 16 13:19:06 crc kubenswrapper[4805]: I1216 13:19:06.504817 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5847f67c56-pp6mh_b738fc79-2b52-4759-a3ee-72e0946df392/manager/0.log"
Dec 16 13:19:06 crc kubenswrapper[4805]: I1216 13:19:06.514681 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-7b45cd6d68-g5msx_37dfd47c-0789-4054-a4c7-37cff4d15b15/kube-rbac-proxy/0.log"
Dec 16 13:19:06 crc kubenswrapper[4805]: I1216 13:19:06.670718 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-7b45cd6d68-g5msx_37dfd47c-0789-4054-a4c7-37cff4d15b15/manager/0.log"
Dec 16 13:19:06 crc kubenswrapper[4805]: I1216 13:19:06.760185 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6985cf78fb-rfvkc_81719820-96fa-418d-9d0b-18ba90027850/kube-rbac-proxy/0.log"
Dec 16 13:19:06 crc kubenswrapper[4805]: I1216 13:19:06.818346 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6985cf78fb-rfvkc_81719820-96fa-418d-9d0b-18ba90027850/manager/0.log"
Dec 16 13:19:06 crc kubenswrapper[4805]: I1216 13:19:06.978959 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-85d55b5858-4gk2l_e817150a-4845-4d56-8dd0-229394b946db/kube-rbac-proxy/0.log"
Dec 16 13:19:07 crc kubenswrapper[4805]: I1216 13:19:07.108502 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-85d55b5858-4gk2l_e817150a-4845-4d56-8dd0-229394b946db/manager/0.log"
Dec 16 13:19:07 crc kubenswrapper[4805]: I1216 13:19:07.204573 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54fd9dc4b5-tszdx_0718fce0-14e8-434b-be98-ef48ec6059f3/kube-rbac-proxy/0.log"
Dec 16 13:19:07 crc kubenswrapper[4805]: I1216 13:19:07.225984 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54fd9dc4b5-tszdx_0718fce0-14e8-434b-be98-ef48ec6059f3/manager/0.log"
Dec 16 13:19:07 crc kubenswrapper[4805]: I1216 13:19:07.665671 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7f764db9b-hq9vm_ca0e9c4d-e3dc-4c3f-8451-8d0258ce9e96/kube-rbac-proxy/0.log"
Dec 16 13:19:07 crc kubenswrapper[4805]: I1216 13:19:07.815954 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7f764db9b-hq9vm_ca0e9c4d-e3dc-4c3f-8451-8d0258ce9e96/manager/0.log"
Dec 16 13:19:07 crc kubenswrapper[4805]: I1216 13:19:07.972301 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7cc599445b-b76nx_0e8bdc0b-046a-4513-9ed6-3350f94faea5/kube-rbac-proxy/0.log"
Dec 16 13:19:08 crc kubenswrapper[4805]: I1216 13:19:08.022426 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7cc599445b-b76nx_0e8bdc0b-046a-4513-9ed6-3350f94faea5/manager/0.log"
Dec 16 13:19:08 crc kubenswrapper[4805]: I1216 13:19:08.073378 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-xhhsn_93f2c029-57dc-47fc-9c2e-18f2710ff53e/kube-rbac-proxy/0.log"
Dec 16 13:19:08 crc kubenswrapper[4805]: I1216 13:19:08.173427 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-xhhsn_93f2c029-57dc-47fc-9c2e-18f2710ff53e/manager/0.log"
Dec 16 13:19:08 crc kubenswrapper[4805]: I1216 13:19:08.271700 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-58879495c-pqrdf_57bf5f89-e14d-442f-8064-2c0ca66139c4/kube-rbac-proxy/0.log"
Dec 16 13:19:08 crc kubenswrapper[4805]: I1216 13:19:08.362984 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-58879495c-pqrdf_57bf5f89-e14d-442f-8064-2c0ca66139c4/manager/0.log"
Dec 16 13:19:08 crc kubenswrapper[4805]: I1216 13:19:08.561603 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b444986fd-5mnhx_b885ab69-dc83-439c-9040-09fc3d238093/kube-rbac-proxy/0.log"
Dec 16 13:19:08 crc kubenswrapper[4805]: I1216 13:19:08.602569 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b444986fd-5mnhx_b885ab69-dc83-439c-9040-09fc3d238093/manager/0.log"
Dec 16 13:19:08 crc kubenswrapper[4805]: I1216 13:19:08.892612 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-d5fb87cb8-p6wbs_df737798-a34c-4142-88c2-592096b02f85/kube-rbac-proxy/0.log"
Dec 16 13:19:09 crc kubenswrapper[4805]: I1216 13:19:09.025531 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk_3b82dc59-a470-4665-8271-3bbcfecb73f1/kube-rbac-proxy/0.log"
Dec 16 13:19:09 crc kubenswrapper[4805]: I1216 13:19:09.074337 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-d5fb87cb8-p6wbs_df737798-a34c-4142-88c2-592096b02f85/manager/0.log"
Dec 16 13:19:09 crc kubenswrapper[4805]: I1216 13:19:09.102100 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk_3b82dc59-a470-4665-8271-3bbcfecb73f1/manager/0.log"
Dec 16 13:19:09 crc kubenswrapper[4805]: I1216 13:19:09.253830 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-54798f4d5-64lpb_cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b/kube-rbac-proxy/0.log"
Dec 16 13:19:09 crc kubenswrapper[4805]: I1216 13:19:09.449016 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-777d8df86-zk62z_990d476a-8cea-4e1a-8e5f-10fa313d23cb/kube-rbac-proxy/0.log"
Dec 16 13:19:09 crc kubenswrapper[4805]: I1216 13:19:09.635047 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-777d8df86-zk62z_990d476a-8cea-4e1a-8e5f-10fa313d23cb/operator/0.log"
Dec 16 13:19:09 crc kubenswrapper[4805]: I1216 13:19:09.680968 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4bhsq_42823d10-65ec-407c-93a4-98d27954a5f3/registry-server/0.log"
Dec 16 13:19:09 crc kubenswrapper[4805]: I1216 13:19:09.943454 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-l26rt_5581487f-dd20-4fb5-99b7-c6cfb197e548/kube-rbac-proxy/0.log"
Dec 16 13:19:09 crc kubenswrapper[4805]: I1216 13:19:09.988122 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-l26rt_5581487f-dd20-4fb5-99b7-c6cfb197e548/manager/0.log"
Dec 16 13:19:10 crc kubenswrapper[4805]: I1216 13:19:10.153464 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-cc776f956-smg8x_9b3aad50-49b1-43c0-84c9-15368e69abae/kube-rbac-proxy/0.log"
Dec 16 13:19:10 crc kubenswrapper[4805]: I1216 13:19:10.279631 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-54798f4d5-64lpb_cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b/manager/0.log"
Dec 16 13:19:10 crc kubenswrapper[4805]: I1216 13:19:10.287937 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-cc776f956-smg8x_9b3aad50-49b1-43c0-84c9-15368e69abae/manager/0.log"
Dec 16 13:19:10 crc kubenswrapper[4805]: I1216 13:19:10.425019 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn_418e4014-2b81-4b93-a665-ca28d1e1d7ee/operator/0.log"
Dec 16 13:19:10 crc kubenswrapper[4805]: I1216 13:19:10.518630 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7c9ff8845d-swkll_bbd2ad8a-7239-4e25-bfbd-a009e826a337/kube-rbac-proxy/0.log"
Dec 16 13:19:10 crc kubenswrapper[4805]: I1216 13:19:10.541541 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7c9ff8845d-swkll_bbd2ad8a-7239-4e25-bfbd-a009e826a337/manager/0.log"
Dec 16 13:19:10 crc kubenswrapper[4805]: I1216 13:19:10.638735 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6bc5b9c47-tdd6s_f5b07707-f2f4-4664-9522-268f8ee833db/kube-rbac-proxy/0.log"
Dec 16 13:19:10 crc kubenswrapper[4805]: I1216 13:19:10.787467 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6bc5b9c47-tdd6s_f5b07707-f2f4-4664-9522-268f8ee833db/manager/0.log"
Dec 16 13:19:10 crc kubenswrapper[4805]: I1216 13:19:10.796445 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5d79c6465c-nldwq_68475cd9-8ddd-44c5-ae7e-446bc92bb188/kube-rbac-proxy/0.log"
Dec 16 13:19:10 crc kubenswrapper[4805]: I1216 13:19:10.923501 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5d79c6465c-nldwq_68475cd9-8ddd-44c5-ae7e-446bc92bb188/manager/0.log"
Dec 16 13:19:10 crc kubenswrapper[4805]: I1216 13:19:10.974795 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-658bc5c8c5-wlr8s_39e02343-d3e2-4e57-b38e-1b275f3cb29d/kube-rbac-proxy/0.log"
Dec 16 13:19:11 crc kubenswrapper[4805]: I1216 13:19:11.057484 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-658bc5c8c5-wlr8s_39e02343-d3e2-4e57-b38e-1b275f3cb29d/manager/0.log"
Dec 16 13:19:27 crc kubenswrapper[4805]: I1216 13:19:27.071880 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 16 13:19:27 crc kubenswrapper[4805]: I1216 13:19:27.073565 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 16 13:19:27 crc kubenswrapper[4805]: I1216 13:19:27.073698 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98"
Dec 16 13:19:27 crc kubenswrapper[4805]: I1216 13:19:27.074477 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 16 13:19:27 crc kubenswrapper[4805]: I1216 13:19:27.074621 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" gracePeriod=600
Dec 16 13:19:27 crc kubenswrapper[4805]: E1216 13:19:27.194721 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9"
Dec 16 13:19:27 crc kubenswrapper[4805]: I1216 13:19:27.650099 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" exitCode=0
Dec 16 13:19:27 crc kubenswrapper[4805]: I1216 13:19:27.650182 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2"}
Dec 16 13:19:27 crc kubenswrapper[4805]: I1216 13:19:27.650257 4805 scope.go:117] "RemoveContainer" containerID="f4fff8ab81a79c6040aa17493152ea9972a4d9da7b64f4dc0cd2a4c74183132a"
Dec 16 13:19:27 crc kubenswrapper[4805]: I1216 13:19:27.650983 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2"
Dec 16 13:19:27 crc kubenswrapper[4805]: E1216 13:19:27.651252 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9"
Dec 16 13:19:29 crc kubenswrapper[4805]: I1216 13:19:29.047282 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-9jccm_df2558d5-6ce0-4fb0-b689-fc8682a89744/control-plane-machine-set-operator/0.log"
Dec 16 13:19:29 crc kubenswrapper[4805]: I1216 13:19:29.255602 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q7lmj_9a656193-7884-4e3d-8a17-4ff680c4a116/kube-rbac-proxy/0.log"
Dec 16 13:19:29 crc kubenswrapper[4805]: I1216 13:19:29.300522 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q7lmj_9a656193-7884-4e3d-8a17-4ff680c4a116/machine-api-operator/0.log"
Dec 16 13:19:41 crc kubenswrapper[4805]: I1216 13:19:41.523596 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2"
Dec 16 13:19:41 crc kubenswrapper[4805]: E1216 13:19:41.524233 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9"
Dec 16 13:19:44 crc kubenswrapper[4805]: I1216 13:19:44.099826 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-6wgm7_31df596c-e28f-4424-8e69-09cadc77cd6d/cert-manager-controller/0.log"
Dec 16 13:19:44 crc kubenswrapper[4805]: I1216 13:19:44.297526 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-mbkms_5a731837-c343-49f3-8bd9-26b04af9b2d0/cert-manager-cainjector/0.log"
Dec 16 13:19:44 crc kubenswrapper[4805]: I1216 13:19:44.360756 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-9m9jg_dd1457ce-6da4-4b68-9bb5-7c57738c0ace/cert-manager-webhook/0.log"
Dec 16 13:19:54 crc kubenswrapper[4805]: I1216 13:19:54.523025 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2"
Dec 16 13:19:54 crc kubenswrapper[4805]: E1216 13:19:54.523721 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9"
Dec 16 13:19:57 crc kubenswrapper[4805]: I1216 13:19:57.460836 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-8v4hk_8fbc1c7a-286e-4428-a745-32211779781e/nmstate-console-plugin/0.log"
Dec 16 13:19:57 crc kubenswrapper[4805]: I1216 13:19:57.739854 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lxsrn_ff660e50-710f-494e-aa58-66abf3868df5/nmstate-handler/0.log"
Dec 16 13:19:57 crc kubenswrapper[4805]: I1216 13:19:57.850439 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-z2kdh_31cb3421-4893-45b4-bb8d-8afd77fe9cb2/kube-rbac-proxy/0.log"
Dec 16 13:19:58 crc kubenswrapper[4805]: I1216 13:19:58.313447 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-z2kdh_31cb3421-4893-45b4-bb8d-8afd77fe9cb2/nmstate-metrics/0.log"
Dec 16 13:19:58 crc kubenswrapper[4805]: I1216 13:19:58.395365 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-dvv9t_4bc64030-6cd9-48cb-8665-3424d3f6897c/nmstate-operator/0.log"
Dec 16 13:19:58 crc kubenswrapper[4805]: I1216 13:19:58.542365 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-m8h9v_212a3b56-a221-4818-a463-90cc9e4e46e5/nmstate-webhook/0.log"
Dec 16 13:20:07 crc kubenswrapper[4805]: I1216 13:20:07.523081 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2"
Dec 16 13:20:07 crc kubenswrapper[4805]: E1216 13:20:07.523890 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.390807 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pgw9n"]
Dec 16 13:20:10 crc kubenswrapper[4805]: E1216 13:20:10.391560 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5003a38e-6925-4bb0-8ff0-d9f35609e352" containerName="registry-server"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.391577 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5003a38e-6925-4bb0-8ff0-d9f35609e352" containerName="registry-server"
Dec 16 13:20:10 crc kubenswrapper[4805]: E1216 13:20:10.391592 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0caee268-c8ae-430c-ab76-f2da8ef2260a" containerName="extract-utilities"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.391598 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0caee268-c8ae-430c-ab76-f2da8ef2260a" containerName="extract-utilities"
Dec 16 13:20:10 crc kubenswrapper[4805]: E1216 13:20:10.391620 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5003a38e-6925-4bb0-8ff0-d9f35609e352" containerName="extract-content"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.391626 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5003a38e-6925-4bb0-8ff0-d9f35609e352" containerName="extract-content"
Dec 16 13:20:10 crc kubenswrapper[4805]: E1216 13:20:10.391639 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5003a38e-6925-4bb0-8ff0-d9f35609e352" containerName="extract-utilities"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.391646 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5003a38e-6925-4bb0-8ff0-d9f35609e352" containerName="extract-utilities"
Dec 16 13:20:10 crc kubenswrapper[4805]: E1216 13:20:10.391658 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0caee268-c8ae-430c-ab76-f2da8ef2260a" containerName="extract-content"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.391665 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0caee268-c8ae-430c-ab76-f2da8ef2260a" containerName="extract-content"
Dec 16 13:20:10 crc kubenswrapper[4805]: E1216 13:20:10.391678 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0caee268-c8ae-430c-ab76-f2da8ef2260a" containerName="registry-server"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.391684 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0caee268-c8ae-430c-ab76-f2da8ef2260a" containerName="registry-server"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.391890 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0caee268-c8ae-430c-ab76-f2da8ef2260a" containerName="registry-server"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.391905 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5003a38e-6925-4bb0-8ff0-d9f35609e352" containerName="registry-server"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.400989 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.406549 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pgw9n"]
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.558990 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde91ab2-26b5-4442-b790-129bbf13d8fb-catalog-content\") pod \"certified-operators-pgw9n\" (UID: \"bde91ab2-26b5-4442-b790-129bbf13d8fb\") " pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.559460 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq6t7\" (UniqueName: \"kubernetes.io/projected/bde91ab2-26b5-4442-b790-129bbf13d8fb-kube-api-access-bq6t7\") pod \"certified-operators-pgw9n\" (UID: \"bde91ab2-26b5-4442-b790-129bbf13d8fb\") " pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.559503 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde91ab2-26b5-4442-b790-129bbf13d8fb-utilities\") pod \"certified-operators-pgw9n\" (UID: \"bde91ab2-26b5-4442-b790-129bbf13d8fb\") " pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.662212 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq6t7\" (UniqueName: \"kubernetes.io/projected/bde91ab2-26b5-4442-b790-129bbf13d8fb-kube-api-access-bq6t7\") pod \"certified-operators-pgw9n\" (UID: \"bde91ab2-26b5-4442-b790-129bbf13d8fb\") " pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.662859 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde91ab2-26b5-4442-b790-129bbf13d8fb-utilities\") pod \"certified-operators-pgw9n\" (UID: \"bde91ab2-26b5-4442-b790-129bbf13d8fb\") " pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.663359 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde91ab2-26b5-4442-b790-129bbf13d8fb-utilities\") pod \"certified-operators-pgw9n\" (UID: \"bde91ab2-26b5-4442-b790-129bbf13d8fb\") " pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.663703 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde91ab2-26b5-4442-b790-129bbf13d8fb-catalog-content\") pod \"certified-operators-pgw9n\" (UID: \"bde91ab2-26b5-4442-b790-129bbf13d8fb\") " pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.664232 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde91ab2-26b5-4442-b790-129bbf13d8fb-catalog-content\") pod \"certified-operators-pgw9n\" (UID: \"bde91ab2-26b5-4442-b790-129bbf13d8fb\") " pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.689122 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq6t7\" (UniqueName: \"kubernetes.io/projected/bde91ab2-26b5-4442-b790-129bbf13d8fb-kube-api-access-bq6t7\") pod \"certified-operators-pgw9n\" (UID: \"bde91ab2-26b5-4442-b790-129bbf13d8fb\") " pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:10 crc kubenswrapper[4805]: I1216 13:20:10.721980 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:11 crc kubenswrapper[4805]: I1216 13:20:11.274409 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pgw9n"]
Dec 16 13:20:12 crc kubenswrapper[4805]: I1216 13:20:12.097034 4805 generic.go:334] "Generic (PLEG): container finished" podID="bde91ab2-26b5-4442-b790-129bbf13d8fb" containerID="11c6fc4494fe9f3e6b4e66696de14f7eda57512218679a84d76a8ade86aa81e9" exitCode=0
Dec 16 13:20:12 crc kubenswrapper[4805]: I1216 13:20:12.097355 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgw9n" event={"ID":"bde91ab2-26b5-4442-b790-129bbf13d8fb","Type":"ContainerDied","Data":"11c6fc4494fe9f3e6b4e66696de14f7eda57512218679a84d76a8ade86aa81e9"}
Dec 16 13:20:12 crc kubenswrapper[4805]: I1216 13:20:12.097384 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgw9n" event={"ID":"bde91ab2-26b5-4442-b790-129bbf13d8fb","Type":"ContainerStarted","Data":"b4537eb596e21f32589343484426a30b0af1991893a4fa14fb89201cc97a2992"}
Dec 16 13:20:13 crc kubenswrapper[4805]: I1216 13:20:13.125897 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgw9n" event={"ID":"bde91ab2-26b5-4442-b790-129bbf13d8fb","Type":"ContainerStarted","Data":"e139978ed3c9986ebb7cf31d93ebd6b132abea07a3bfd980a27c0b225d029757"}
Dec 16 13:20:14 crc kubenswrapper[4805]: I1216 13:20:14.137768 4805 generic.go:334] "Generic (PLEG): container finished" podID="bde91ab2-26b5-4442-b790-129bbf13d8fb" containerID="e139978ed3c9986ebb7cf31d93ebd6b132abea07a3bfd980a27c0b225d029757" exitCode=0
Dec 16 13:20:14 crc kubenswrapper[4805]: I1216 13:20:14.138479 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgw9n" event={"ID":"bde91ab2-26b5-4442-b790-129bbf13d8fb","Type":"ContainerDied","Data":"e139978ed3c9986ebb7cf31d93ebd6b132abea07a3bfd980a27c0b225d029757"}
Dec 16 13:20:15 crc kubenswrapper[4805]: I1216 13:20:15.167870 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-mw2xw_601c7e3b-b663-4780-bb6a-59bc7e4d510d/kube-rbac-proxy/0.log"
Dec 16 13:20:15 crc kubenswrapper[4805]: I1216 13:20:15.390389 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-mw2xw_601c7e3b-b663-4780-bb6a-59bc7e4d510d/controller/0.log"
Dec 16 13:20:15 crc kubenswrapper[4805]: I1216 13:20:15.489293 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-frr-files/0.log"
Dec 16 13:20:16 crc kubenswrapper[4805]: I1216 13:20:16.167470 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgw9n" event={"ID":"bde91ab2-26b5-4442-b790-129bbf13d8fb","Type":"ContainerStarted","Data":"441c404331fea4f19abc1b0df26fe786e744636108705710bdc532cd25a096be"}
Dec 16 13:20:16 crc kubenswrapper[4805]: I1216 13:20:16.201802 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pgw9n" podStartSLOduration=3.149506297 podStartE2EDuration="6.201779356s" podCreationTimestamp="2025-12-16 13:20:10 +0000 UTC" firstStartedPulling="2025-12-16 13:20:12.099013927 +0000 UTC m=+5085.817271733" lastFinishedPulling="2025-12-16 13:20:15.151286987 +0000 UTC m=+5088.869544792" observedRunningTime="2025-12-16 13:20:16.192680356 +0000 UTC m=+5089.910938161" watchObservedRunningTime="2025-12-16 13:20:16.201779356 +0000 UTC m=+5089.920037171"
Dec 16 13:20:16 crc kubenswrapper[4805]: I1216 13:20:16.252893 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-reloader/0.log"
Dec 16 13:20:16 crc kubenswrapper[4805]: I1216 13:20:16.259062 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-metrics/0.log"
Dec 16 13:20:16 crc kubenswrapper[4805]: I1216 13:20:16.280484 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-frr-files/0.log"
Dec 16 13:20:16 crc kubenswrapper[4805]: I1216 13:20:16.353481 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-reloader/0.log"
Dec 16 13:20:16 crc kubenswrapper[4805]: I1216 13:20:16.566650 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-reloader/0.log"
Dec 16 13:20:16 crc kubenswrapper[4805]: I1216 13:20:16.591750 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-frr-files/0.log"
Dec 16 13:20:16 crc kubenswrapper[4805]: I1216 13:20:16.591864 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-metrics/0.log"
Dec 16 13:20:16 crc kubenswrapper[4805]: I1216 13:20:16.644338 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-metrics/0.log"
Dec 16 13:20:16 crc kubenswrapper[4805]: I1216 13:20:16.922482 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-frr-files/0.log"
Dec 16 13:20:17 crc kubenswrapper[4805]: I1216 13:20:17.054488 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/controller/0.log"
Dec 16 13:20:17 crc kubenswrapper[4805]: I1216 13:20:17.057345 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-reloader/0.log"
Dec 16 13:20:17 crc kubenswrapper[4805]: I1216 13:20:17.094669 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-metrics/0.log"
Dec 16 13:20:17 crc kubenswrapper[4805]: I1216 13:20:17.295747 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/kube-rbac-proxy/0.log"
Dec 16 13:20:17 crc kubenswrapper[4805]: I1216 13:20:17.347900 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/frr-metrics/0.log"
Dec 16 13:20:17 crc kubenswrapper[4805]: I1216 13:20:17.396645 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/kube-rbac-proxy-frr/0.log"
Dec 16 13:20:18 crc kubenswrapper[4805]: I1216 13:20:18.024519 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/reloader/0.log"
Dec 16 13:20:18 crc kubenswrapper[4805]: I1216 13:20:18.083881 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-zqndj_a7792c7c-66d6-4c58-ba5f-09ddbb883c20/frr-k8s-webhook-server/0.log"
Dec 16 13:20:18 crc kubenswrapper[4805]: I1216 13:20:18.523994 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2"
Dec 16 13:20:18 crc kubenswrapper[4805]: E1216 13:20:18.524269 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9"
Dec 16 13:20:18 crc kubenswrapper[4805]: I1216 13:20:18.540331 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-596ddb8dd6-lh97f_e7312814-4835-454a-b7c4-5036ce21ef36/manager/0.log"
Dec 16 13:20:18 crc kubenswrapper[4805]: I1216 13:20:18.642386 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-74df5c45cc-dv7cj_1f6983ab-d0c5-4431-878f-86c4d91d6720/webhook-server/0.log"
Dec 16 13:20:19 crc kubenswrapper[4805]: I1216 13:20:19.911704 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zjknk_89665fca-66d5-4ff5-98d9-e49065febb40/kube-rbac-proxy/0.log"
Dec 16 13:20:19 crc kubenswrapper[4805]: I1216 13:20:19.968549 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/frr/0.log"
Dec 16 13:20:20 crc kubenswrapper[4805]: I1216 13:20:20.721937 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:20 crc kubenswrapper[4805]: I1216 13:20:20.722287 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:20 crc kubenswrapper[4805]: I1216 13:20:20.822896 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zjknk_89665fca-66d5-4ff5-98d9-e49065febb40/speaker/0.log"
Dec 16 13:20:21 crc kubenswrapper[4805]: I1216 13:20:21.184813 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:21 crc kubenswrapper[4805]: I1216 13:20:21.297992 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:21 crc kubenswrapper[4805]: I1216 13:20:21.575933 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pgw9n"]
Dec 16 13:20:23 crc kubenswrapper[4805]: I1216 13:20:23.256021 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pgw9n" podUID="bde91ab2-26b5-4442-b790-129bbf13d8fb" containerName="registry-server" containerID="cri-o://441c404331fea4f19abc1b0df26fe786e744636108705710bdc532cd25a096be" gracePeriod=2
Dec 16 13:20:23 crc kubenswrapper[4805]: I1216 13:20:23.775941 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:23 crc kubenswrapper[4805]: I1216 13:20:23.828630 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq6t7\" (UniqueName: \"kubernetes.io/projected/bde91ab2-26b5-4442-b790-129bbf13d8fb-kube-api-access-bq6t7\") pod \"bde91ab2-26b5-4442-b790-129bbf13d8fb\" (UID: \"bde91ab2-26b5-4442-b790-129bbf13d8fb\") "
Dec 16 13:20:23 crc kubenswrapper[4805]: I1216 13:20:23.828757 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde91ab2-26b5-4442-b790-129bbf13d8fb-catalog-content\") pod \"bde91ab2-26b5-4442-b790-129bbf13d8fb\" (UID: \"bde91ab2-26b5-4442-b790-129bbf13d8fb\") "
Dec 16 13:20:23 crc kubenswrapper[4805]: I1216 13:20:23.828848 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde91ab2-26b5-4442-b790-129bbf13d8fb-utilities\") pod \"bde91ab2-26b5-4442-b790-129bbf13d8fb\" (UID: \"bde91ab2-26b5-4442-b790-129bbf13d8fb\") "
Dec 16 13:20:23 crc kubenswrapper[4805]: I1216 13:20:23.829362 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bde91ab2-26b5-4442-b790-129bbf13d8fb-utilities" (OuterVolumeSpecName: "utilities") pod "bde91ab2-26b5-4442-b790-129bbf13d8fb" (UID: "bde91ab2-26b5-4442-b790-129bbf13d8fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 16 13:20:23 crc kubenswrapper[4805]: I1216 13:20:23.838443 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde91ab2-26b5-4442-b790-129bbf13d8fb-kube-api-access-bq6t7" (OuterVolumeSpecName: "kube-api-access-bq6t7") pod "bde91ab2-26b5-4442-b790-129bbf13d8fb" (UID: "bde91ab2-26b5-4442-b790-129bbf13d8fb"). InnerVolumeSpecName "kube-api-access-bq6t7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 13:20:23 crc kubenswrapper[4805]: I1216 13:20:23.894762 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bde91ab2-26b5-4442-b790-129bbf13d8fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bde91ab2-26b5-4442-b790-129bbf13d8fb" (UID: "bde91ab2-26b5-4442-b790-129bbf13d8fb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 16 13:20:23 crc kubenswrapper[4805]: I1216 13:20:23.931256 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde91ab2-26b5-4442-b790-129bbf13d8fb-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 16 13:20:23 crc kubenswrapper[4805]: I1216 13:20:23.931295 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde91ab2-26b5-4442-b790-129bbf13d8fb-utilities\") on node \"crc\" DevicePath \"\""
Dec 16 13:20:23 crc kubenswrapper[4805]: I1216 13:20:23.931307 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq6t7\" (UniqueName: \"kubernetes.io/projected/bde91ab2-26b5-4442-b790-129bbf13d8fb-kube-api-access-bq6t7\") on node \"crc\" DevicePath \"\""
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.266848 4805 generic.go:334] "Generic (PLEG): container finished" podID="bde91ab2-26b5-4442-b790-129bbf13d8fb" containerID="441c404331fea4f19abc1b0df26fe786e744636108705710bdc532cd25a096be" exitCode=0
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.266888 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgw9n" event={"ID":"bde91ab2-26b5-4442-b790-129bbf13d8fb","Type":"ContainerDied","Data":"441c404331fea4f19abc1b0df26fe786e744636108705710bdc532cd25a096be"}
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.266910 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pgw9n"
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.266926 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgw9n" event={"ID":"bde91ab2-26b5-4442-b790-129bbf13d8fb","Type":"ContainerDied","Data":"b4537eb596e21f32589343484426a30b0af1991893a4fa14fb89201cc97a2992"}
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.266946 4805 scope.go:117] "RemoveContainer" containerID="441c404331fea4f19abc1b0df26fe786e744636108705710bdc532cd25a096be"
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.288774 4805 scope.go:117] "RemoveContainer" containerID="e139978ed3c9986ebb7cf31d93ebd6b132abea07a3bfd980a27c0b225d029757"
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.308312 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pgw9n"]
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.323556 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pgw9n"]
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.337052 4805 scope.go:117] "RemoveContainer" containerID="11c6fc4494fe9f3e6b4e66696de14f7eda57512218679a84d76a8ade86aa81e9"
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.394297 4805 scope.go:117] "RemoveContainer" containerID="441c404331fea4f19abc1b0df26fe786e744636108705710bdc532cd25a096be"
Dec 16 13:20:24 crc kubenswrapper[4805]: E1216 13:20:24.395393 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"441c404331fea4f19abc1b0df26fe786e744636108705710bdc532cd25a096be\": container with ID starting with 441c404331fea4f19abc1b0df26fe786e744636108705710bdc532cd25a096be not found: ID does not exist" containerID="441c404331fea4f19abc1b0df26fe786e744636108705710bdc532cd25a096be"
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.395449 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"441c404331fea4f19abc1b0df26fe786e744636108705710bdc532cd25a096be"} err="failed to get container status \"441c404331fea4f19abc1b0df26fe786e744636108705710bdc532cd25a096be\": rpc error: code = NotFound desc = could not find container \"441c404331fea4f19abc1b0df26fe786e744636108705710bdc532cd25a096be\": container with ID starting with 441c404331fea4f19abc1b0df26fe786e744636108705710bdc532cd25a096be not found: ID does not exist"
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.395520 4805 scope.go:117] "RemoveContainer" containerID="e139978ed3c9986ebb7cf31d93ebd6b132abea07a3bfd980a27c0b225d029757"
Dec 16 13:20:24 crc kubenswrapper[4805]: E1216 13:20:24.395935 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e139978ed3c9986ebb7cf31d93ebd6b132abea07a3bfd980a27c0b225d029757\": container with ID starting with e139978ed3c9986ebb7cf31d93ebd6b132abea07a3bfd980a27c0b225d029757 not found: ID does not exist" containerID="e139978ed3c9986ebb7cf31d93ebd6b132abea07a3bfd980a27c0b225d029757"
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.395967 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e139978ed3c9986ebb7cf31d93ebd6b132abea07a3bfd980a27c0b225d029757"} err="failed to get container status \"e139978ed3c9986ebb7cf31d93ebd6b132abea07a3bfd980a27c0b225d029757\": rpc error: code = NotFound desc = could not find container \"e139978ed3c9986ebb7cf31d93ebd6b132abea07a3bfd980a27c0b225d029757\": container with ID starting with e139978ed3c9986ebb7cf31d93ebd6b132abea07a3bfd980a27c0b225d029757 not found: ID does not exist"
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.395989 4805 scope.go:117] "RemoveContainer" containerID="11c6fc4494fe9f3e6b4e66696de14f7eda57512218679a84d76a8ade86aa81e9"
Dec 16 13:20:24 crc kubenswrapper[4805]: E1216 13:20:24.396349 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11c6fc4494fe9f3e6b4e66696de14f7eda57512218679a84d76a8ade86aa81e9\": container with ID starting with 11c6fc4494fe9f3e6b4e66696de14f7eda57512218679a84d76a8ade86aa81e9 not found: ID does not exist" containerID="11c6fc4494fe9f3e6b4e66696de14f7eda57512218679a84d76a8ade86aa81e9"
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.396377 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11c6fc4494fe9f3e6b4e66696de14f7eda57512218679a84d76a8ade86aa81e9"} err="failed to get container status \"11c6fc4494fe9f3e6b4e66696de14f7eda57512218679a84d76a8ade86aa81e9\": rpc error: code = NotFound desc = could not find container \"11c6fc4494fe9f3e6b4e66696de14f7eda57512218679a84d76a8ade86aa81e9\": container with ID starting with 11c6fc4494fe9f3e6b4e66696de14f7eda57512218679a84d76a8ade86aa81e9 not found: ID does not exist"
Dec 16 13:20:24 crc kubenswrapper[4805]: I1216 13:20:24.538694 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde91ab2-26b5-4442-b790-129bbf13d8fb" path="/var/lib/kubelet/pods/bde91ab2-26b5-4442-b790-129bbf13d8fb/volumes"
Dec 16 13:20:29 crc kubenswrapper[4805]: I1216 13:20:29.523001 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2"
Dec 16 13:20:29 crc kubenswrapper[4805]: E1216 13:20:29.523702 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9"
Dec 16 13:20:32 crc kubenswrapper[4805]: I1216 13:20:32.872232 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx_fd931a62-7c95-4999-8a4f-5c57209ea44f/util/0.log"
Dec 16 13:20:33 crc kubenswrapper[4805]: I1216 13:20:33.328068 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx_fd931a62-7c95-4999-8a4f-5c57209ea44f/pull/0.log"
Dec 16 13:20:33 crc kubenswrapper[4805]: I1216 13:20:33.336725 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx_fd931a62-7c95-4999-8a4f-5c57209ea44f/pull/0.log"
Dec 16 13:20:33 crc kubenswrapper[4805]: I1216 13:20:33.387185 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx_fd931a62-7c95-4999-8a4f-5c57209ea44f/util/0.log"
Dec 16 13:20:33 crc kubenswrapper[4805]: I1216 13:20:33.547179 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx_fd931a62-7c95-4999-8a4f-5c57209ea44f/util/0.log"
Dec 16 13:20:33 crc kubenswrapper[4805]: I1216 13:20:33.579477 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx_fd931a62-7c95-4999-8a4f-5c57209ea44f/extract/0.log"
Dec 16 13:20:33 crc kubenswrapper[4805]: I1216 13:20:33.596263 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx_fd931a62-7c95-4999-8a4f-5c57209ea44f/pull/0.log"
Dec 16 13:20:33 crc kubenswrapper[4805]: I1216 13:20:33.792236 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct_67333641-ba72-4136-9667-27fe128bbd8f/util/0.log"
Dec 16 13:20:33 crc kubenswrapper[4805]: I1216 13:20:33.960393 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct_67333641-ba72-4136-9667-27fe128bbd8f/util/0.log"
Dec 16 13:20:33 crc kubenswrapper[4805]: I1216 13:20:33.994476 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct_67333641-ba72-4136-9667-27fe128bbd8f/pull/0.log"
Dec 16 13:20:34 crc kubenswrapper[4805]: I1216 13:20:34.025987 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct_67333641-ba72-4136-9667-27fe128bbd8f/pull/0.log"
Dec 16 13:20:34 crc kubenswrapper[4805]: I1216 13:20:34.144218 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct_67333641-ba72-4136-9667-27fe128bbd8f/util/0.log"
Dec 16 13:20:34 crc kubenswrapper[4805]: I1216 13:20:34.194649 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct_67333641-ba72-4136-9667-27fe128bbd8f/pull/0.log"
Dec 16 13:20:34 crc kubenswrapper[4805]: I1216 13:20:34.203247 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct_67333641-ba72-4136-9667-27fe128bbd8f/extract/0.log"
Dec 16 13:20:34 crc kubenswrapper[4805]: I1216 13:20:34.365597 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cqgcs_b2e2559b-85ef-43d4-8c14-4aa510a5132c/extract-utilities/0.log"
Dec 16 13:20:34 crc kubenswrapper[4805]: I1216 13:20:34.604216 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cqgcs_b2e2559b-85ef-43d4-8c14-4aa510a5132c/extract-utilities/0.log"
Dec 16 13:20:34 crc kubenswrapper[4805]: I1216 13:20:34.624821 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cqgcs_b2e2559b-85ef-43d4-8c14-4aa510a5132c/extract-content/0.log"
Dec 16 13:20:34 crc kubenswrapper[4805]: I1216 13:20:34.654574 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cqgcs_b2e2559b-85ef-43d4-8c14-4aa510a5132c/extract-content/0.log"
Dec 16 13:20:34 crc kubenswrapper[4805]: I1216 13:20:34.793681 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cqgcs_b2e2559b-85ef-43d4-8c14-4aa510a5132c/extract-utilities/0.log"
Dec 16 13:20:34 crc kubenswrapper[4805]: I1216 13:20:34.844132 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cqgcs_b2e2559b-85ef-43d4-8c14-4aa510a5132c/extract-content/0.log"
Dec 16 13:20:35 crc kubenswrapper[4805]: I1216 13:20:35.091665 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9w4vc_c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5/extract-utilities/0.log"
Dec 16 13:20:35 crc kubenswrapper[4805]: I1216 13:20:35.390604 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cqgcs_b2e2559b-85ef-43d4-8c14-4aa510a5132c/registry-server/0.log"
Dec 16 13:20:35 crc kubenswrapper[4805]: I1216 13:20:35.472464 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9w4vc_c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5/extract-utilities/0.log"
Dec 16 13:20:35 crc kubenswrapper[4805]: I1216 13:20:35.500542 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9w4vc_c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5/extract-content/0.log"
Dec 16 13:20:35 crc kubenswrapper[4805]: I1216 13:20:35.558121 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9w4vc_c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5/extract-content/0.log"
Dec 16 13:20:35 crc kubenswrapper[4805]: I1216 13:20:35.752787 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9w4vc_c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5/extract-utilities/0.log"
Dec 16 13:20:35 crc kubenswrapper[4805]:
I1216 13:20:35.777246 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9w4vc_c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5/extract-content/0.log" Dec 16 13:20:36 crc kubenswrapper[4805]: I1216 13:20:36.544095 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9w4vc_c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5/registry-server/0.log" Dec 16 13:20:36 crc kubenswrapper[4805]: I1216 13:20:36.980238 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-g2w5p_ec1e6ca1-a29e-4572-8326-f4119b22b30a/marketplace-operator/0.log" Dec 16 13:20:37 crc kubenswrapper[4805]: I1216 13:20:37.002990 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7qdvb_5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1/extract-utilities/0.log" Dec 16 13:20:37 crc kubenswrapper[4805]: I1216 13:20:37.176855 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7qdvb_5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1/extract-utilities/0.log" Dec 16 13:20:37 crc kubenswrapper[4805]: I1216 13:20:37.184593 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7qdvb_5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1/extract-content/0.log" Dec 16 13:20:37 crc kubenswrapper[4805]: I1216 13:20:37.310241 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7qdvb_5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1/extract-content/0.log" Dec 16 13:20:37 crc kubenswrapper[4805]: I1216 13:20:37.451285 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7qdvb_5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1/extract-utilities/0.log" Dec 16 13:20:37 crc kubenswrapper[4805]: I1216 13:20:37.497312 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7qdvb_5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1/extract-content/0.log" Dec 16 13:20:37 crc kubenswrapper[4805]: I1216 13:20:37.638154 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ptrmm_2eb513c8-933e-42ca-8720-a0fed194ea8d/extract-utilities/0.log" Dec 16 13:20:37 crc kubenswrapper[4805]: I1216 13:20:37.642802 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7qdvb_5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1/registry-server/0.log" Dec 16 13:20:37 crc kubenswrapper[4805]: I1216 13:20:37.814785 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ptrmm_2eb513c8-933e-42ca-8720-a0fed194ea8d/extract-utilities/0.log" Dec 16 13:20:38 crc kubenswrapper[4805]: I1216 13:20:38.459378 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ptrmm_2eb513c8-933e-42ca-8720-a0fed194ea8d/extract-content/0.log" Dec 16 13:20:38 crc kubenswrapper[4805]: I1216 13:20:38.515204 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ptrmm_2eb513c8-933e-42ca-8720-a0fed194ea8d/extract-content/0.log" Dec 16 13:20:38 crc kubenswrapper[4805]: I1216 13:20:38.657876 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ptrmm_2eb513c8-933e-42ca-8720-a0fed194ea8d/extract-utilities/0.log" Dec 16 13:20:38 crc kubenswrapper[4805]: I1216 13:20:38.693954 
4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ptrmm_2eb513c8-933e-42ca-8720-a0fed194ea8d/extract-content/0.log" Dec 16 13:20:39 crc kubenswrapper[4805]: I1216 13:20:39.247157 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ptrmm_2eb513c8-933e-42ca-8720-a0fed194ea8d/registry-server/0.log" Dec 16 13:20:43 crc kubenswrapper[4805]: I1216 13:20:43.522782 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:20:43 crc kubenswrapper[4805]: E1216 13:20:43.523633 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:20:56 crc kubenswrapper[4805]: I1216 13:20:56.530120 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:20:56 crc kubenswrapper[4805]: E1216 13:20:56.530955 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:21:11 crc kubenswrapper[4805]: I1216 13:21:11.522726 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:21:11 crc kubenswrapper[4805]: E1216 13:21:11.523686 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:21:26 crc kubenswrapper[4805]: I1216 13:21:26.537534 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:21:26 crc kubenswrapper[4805]: E1216 13:21:26.538494 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:21:40 crc kubenswrapper[4805]: I1216 13:21:40.522900 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:21:40 crc kubenswrapper[4805]: E1216 13:21:40.523730 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:21:42 crc kubenswrapper[4805]: I1216 13:21:42.813005 4805 trace.go:236] Trace[1108371197]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-tpb8r" (16-Dec-2025 13:21:40.709) (total time: 2103ms): Dec 16 13:21:42 crc kubenswrapper[4805]: Trace[1108371197]: [2.103272084s] [2.103272084s] END Dec 16 13:21:53 crc kubenswrapper[4805]: I1216 13:21:53.523006 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:21:53 crc kubenswrapper[4805]: E1216 13:21:53.523739 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:22:07 crc kubenswrapper[4805]: I1216 13:22:07.522504 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:22:07 crc kubenswrapper[4805]: E1216 13:22:07.523175 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:22:19 crc kubenswrapper[4805]: I1216 13:22:19.522101 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:22:19 crc kubenswrapper[4805]: E1216 13:22:19.522924 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:22:32 crc kubenswrapper[4805]: I1216 13:22:32.522552 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:22:32 crc kubenswrapper[4805]: E1216 13:22:32.523226 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:22:47 crc kubenswrapper[4805]: I1216 13:22:47.524288 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:22:47 crc kubenswrapper[4805]: E1216 13:22:47.527064 4805 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:22:58 crc kubenswrapper[4805]: I1216 13:22:58.522985 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:22:58 crc kubenswrapper[4805]: E1216 13:22:58.523707 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:23:01 crc kubenswrapper[4805]: I1216 13:23:01.681443 4805 scope.go:117] "RemoveContainer" containerID="91af1edf2fbc6faaf92f46b3963390491e551c85fdbf36a578efb2dbe07d79bf" Dec 16 13:23:09 crc kubenswrapper[4805]: I1216 13:23:09.523365 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:23:09 crc kubenswrapper[4805]: E1216 13:23:09.524201 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:23:16 crc kubenswrapper[4805]: I1216 13:23:16.096638 4805 generic.go:334] "Generic (PLEG): container finished" podID="42a345f5-d310-4907-8fe5-be3b705c774d" containerID="6956c477298e37297736ed06f3b1feb79ea841d83cdb0fbdd981698e338e24ef" exitCode=0 Dec 16 13:23:16 crc kubenswrapper[4805]: I1216 13:23:16.097101 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dbrmg/must-gather-dtw2n" event={"ID":"42a345f5-d310-4907-8fe5-be3b705c774d","Type":"ContainerDied","Data":"6956c477298e37297736ed06f3b1feb79ea841d83cdb0fbdd981698e338e24ef"} Dec 16 13:23:16 crc kubenswrapper[4805]: I1216 13:23:16.099123 4805 scope.go:117] "RemoveContainer" containerID="6956c477298e37297736ed06f3b1feb79ea841d83cdb0fbdd981698e338e24ef" Dec 16 13:23:16 crc kubenswrapper[4805]: I1216 13:23:16.320521 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dbrmg_must-gather-dtw2n_42a345f5-d310-4907-8fe5-be3b705c774d/gather/0.log" Dec 16 13:23:24 crc kubenswrapper[4805]: I1216 13:23:24.524283 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:23:24 crc kubenswrapper[4805]: E1216 13:23:24.525216 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" 
podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:23:24 crc kubenswrapper[4805]: I1216 13:23:24.916557 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dbrmg/must-gather-dtw2n"] Dec 16 13:23:24 crc kubenswrapper[4805]: I1216 13:23:24.916885 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-dbrmg/must-gather-dtw2n" podUID="42a345f5-d310-4907-8fe5-be3b705c774d" containerName="copy" containerID="cri-o://c3e69f450bc782ee8c391eca9dfe352918256dd92925c190725eb8d7a9ca51c6" gracePeriod=2 Dec 16 13:23:24 crc kubenswrapper[4805]: I1216 13:23:24.928844 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dbrmg/must-gather-dtw2n"] Dec 16 13:23:25 crc kubenswrapper[4805]: I1216 13:23:25.201825 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dbrmg_must-gather-dtw2n_42a345f5-d310-4907-8fe5-be3b705c774d/copy/0.log" Dec 16 13:23:25 crc kubenswrapper[4805]: I1216 13:23:25.202763 4805 generic.go:334] "Generic (PLEG): container finished" podID="42a345f5-d310-4907-8fe5-be3b705c774d" containerID="c3e69f450bc782ee8c391eca9dfe352918256dd92925c190725eb8d7a9ca51c6" exitCode=143 Dec 16 13:23:25 crc kubenswrapper[4805]: I1216 13:23:25.373400 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dbrmg_must-gather-dtw2n_42a345f5-d310-4907-8fe5-be3b705c774d/copy/0.log" Dec 16 13:23:25 crc kubenswrapper[4805]: I1216 13:23:25.373958 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dbrmg/must-gather-dtw2n" Dec 16 13:23:25 crc kubenswrapper[4805]: I1216 13:23:25.498162 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhthc\" (UniqueName: \"kubernetes.io/projected/42a345f5-d310-4907-8fe5-be3b705c774d-kube-api-access-jhthc\") pod \"42a345f5-d310-4907-8fe5-be3b705c774d\" (UID: \"42a345f5-d310-4907-8fe5-be3b705c774d\") " Dec 16 13:23:25 crc kubenswrapper[4805]: I1216 13:23:25.498268 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42a345f5-d310-4907-8fe5-be3b705c774d-must-gather-output\") pod \"42a345f5-d310-4907-8fe5-be3b705c774d\" (UID: \"42a345f5-d310-4907-8fe5-be3b705c774d\") " Dec 16 13:23:25 crc kubenswrapper[4805]: I1216 13:23:25.507247 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a345f5-d310-4907-8fe5-be3b705c774d-kube-api-access-jhthc" (OuterVolumeSpecName: "kube-api-access-jhthc") pod "42a345f5-d310-4907-8fe5-be3b705c774d" (UID: "42a345f5-d310-4907-8fe5-be3b705c774d"). InnerVolumeSpecName "kube-api-access-jhthc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:23:25 crc kubenswrapper[4805]: I1216 13:23:25.600694 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhthc\" (UniqueName: \"kubernetes.io/projected/42a345f5-d310-4907-8fe5-be3b705c774d-kube-api-access-jhthc\") on node \"crc\" DevicePath \"\"" Dec 16 13:23:25 crc kubenswrapper[4805]: I1216 13:23:25.854581 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42a345f5-d310-4907-8fe5-be3b705c774d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "42a345f5-d310-4907-8fe5-be3b705c774d" (UID: "42a345f5-d310-4907-8fe5-be3b705c774d"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:23:25 crc kubenswrapper[4805]: I1216 13:23:25.874657 4805 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42a345f5-d310-4907-8fe5-be3b705c774d-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 16 13:23:26 crc kubenswrapper[4805]: I1216 13:23:26.214099 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dbrmg_must-gather-dtw2n_42a345f5-d310-4907-8fe5-be3b705c774d/copy/0.log" Dec 16 13:23:26 crc kubenswrapper[4805]: I1216 13:23:26.214545 4805 scope.go:117] "RemoveContainer" containerID="c3e69f450bc782ee8c391eca9dfe352918256dd92925c190725eb8d7a9ca51c6" Dec 16 13:23:26 crc kubenswrapper[4805]: I1216 13:23:26.214612 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dbrmg/must-gather-dtw2n" Dec 16 13:23:26 crc kubenswrapper[4805]: I1216 13:23:26.242469 4805 scope.go:117] "RemoveContainer" containerID="6956c477298e37297736ed06f3b1feb79ea841d83cdb0fbdd981698e338e24ef" Dec 16 13:23:26 crc kubenswrapper[4805]: I1216 13:23:26.536756 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a345f5-d310-4907-8fe5-be3b705c774d" path="/var/lib/kubelet/pods/42a345f5-d310-4907-8fe5-be3b705c774d/volumes" Dec 16 13:23:37 crc kubenswrapper[4805]: I1216 13:23:37.522698 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:23:37 crc kubenswrapper[4805]: E1216 13:23:37.523476 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:23:52 crc kubenswrapper[4805]: I1216 13:23:52.523230 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:23:52 crc kubenswrapper[4805]: E1216 13:23:52.524267 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:24:01 crc kubenswrapper[4805]: I1216 13:24:01.750566 4805 scope.go:117] "RemoveContainer" containerID="dd054bcdce08e68832e4fd2aae1fde67328320e9e9e841773bf77ad3a4fb018d" Dec 16 13:24:06 crc kubenswrapper[4805]: I1216 13:24:06.529028 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:24:06 crc kubenswrapper[4805]: E1216 13:24:06.529923 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" 
Dec 16 13:24:19 crc kubenswrapper[4805]: I1216 13:24:19.522887 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:24:19 crc kubenswrapper[4805]: E1216 13:24:19.523847 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:24:31 crc kubenswrapper[4805]: I1216 13:24:31.523092 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:24:31 crc kubenswrapper[4805]: I1216 13:24:31.838676 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"5e40366d476e3b90378575a27eb523bfc7438df121ddfab745128aced0f99f31"} Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.110541 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zmb7m"] Dec 16 13:25:54 crc kubenswrapper[4805]: E1216 13:25:54.111637 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a345f5-d310-4907-8fe5-be3b705c774d" containerName="gather" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.111677 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a345f5-d310-4907-8fe5-be3b705c774d" containerName="gather" Dec 16 13:25:54 crc kubenswrapper[4805]: E1216 13:25:54.111710 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde91ab2-26b5-4442-b790-129bbf13d8fb" containerName="extract-content" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.111716 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde91ab2-26b5-4442-b790-129bbf13d8fb" containerName="extract-content" Dec 16 13:25:54 crc kubenswrapper[4805]: E1216 13:25:54.111738 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde91ab2-26b5-4442-b790-129bbf13d8fb" containerName="extract-utilities" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.111747 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde91ab2-26b5-4442-b790-129bbf13d8fb" containerName="extract-utilities" Dec 16 13:25:54 crc kubenswrapper[4805]: E1216 13:25:54.111760 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a345f5-d310-4907-8fe5-be3b705c774d" containerName="copy" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.111765 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a345f5-d310-4907-8fe5-be3b705c774d" containerName="copy" Dec 16 13:25:54 crc kubenswrapper[4805]: E1216 13:25:54.111781 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde91ab2-26b5-4442-b790-129bbf13d8fb" containerName="registry-server" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.111787 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde91ab2-26b5-4442-b790-129bbf13d8fb" containerName="registry-server" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.112016 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde91ab2-26b5-4442-b790-129bbf13d8fb" containerName="registry-server" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 
13:25:54.112031 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="42a345f5-d310-4907-8fe5-be3b705c774d" containerName="gather" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.112049 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="42a345f5-d310-4907-8fe5-be3b705c774d" containerName="copy" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.113768 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.118958 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zmb7m"] Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.259864 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50281c85-5649-4d09-ad73-7d71173b0732-utilities\") pod \"redhat-operators-zmb7m\" (UID: \"50281c85-5649-4d09-ad73-7d71173b0732\") " pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.259922 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50281c85-5649-4d09-ad73-7d71173b0732-catalog-content\") pod \"redhat-operators-zmb7m\" (UID: \"50281c85-5649-4d09-ad73-7d71173b0732\") " pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.260077 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djzzz\" (UniqueName: \"kubernetes.io/projected/50281c85-5649-4d09-ad73-7d71173b0732-kube-api-access-djzzz\") pod \"redhat-operators-zmb7m\" (UID: \"50281c85-5649-4d09-ad73-7d71173b0732\") " pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.362098 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djzzz\" (UniqueName: \"kubernetes.io/projected/50281c85-5649-4d09-ad73-7d71173b0732-kube-api-access-djzzz\") pod \"redhat-operators-zmb7m\" (UID: \"50281c85-5649-4d09-ad73-7d71173b0732\") " pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.362232 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50281c85-5649-4d09-ad73-7d71173b0732-utilities\") pod \"redhat-operators-zmb7m\" (UID: \"50281c85-5649-4d09-ad73-7d71173b0732\") " pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.362250 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50281c85-5649-4d09-ad73-7d71173b0732-catalog-content\") pod \"redhat-operators-zmb7m\" (UID: \"50281c85-5649-4d09-ad73-7d71173b0732\") " pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.362755 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50281c85-5649-4d09-ad73-7d71173b0732-catalog-content\") pod \"redhat-operators-zmb7m\" (UID: \"50281c85-5649-4d09-ad73-7d71173b0732\") " pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.363023 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50281c85-5649-4d09-ad73-7d71173b0732-utilities\") pod \"redhat-operators-zmb7m\" (UID: \"50281c85-5649-4d09-ad73-7d71173b0732\") " pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.387990 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djzzz\" (UniqueName: \"kubernetes.io/projected/50281c85-5649-4d09-ad73-7d71173b0732-kube-api-access-djzzz\") pod \"redhat-operators-zmb7m\" (UID: \"50281c85-5649-4d09-ad73-7d71173b0732\") " pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:25:54 crc kubenswrapper[4805]: I1216 13:25:54.447045 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:25:55 crc kubenswrapper[4805]: I1216 13:25:55.158775 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zmb7m"] Dec 16 13:25:55 crc kubenswrapper[4805]: I1216 13:25:55.651254 4805 generic.go:334] "Generic (PLEG): container finished" podID="50281c85-5649-4d09-ad73-7d71173b0732" containerID="c6dae600448b734ce16901cc8ff88da4ced8befd2d981f6cf5a07aa89653d8f4" exitCode=0 Dec 16 13:25:55 crc kubenswrapper[4805]: I1216 13:25:55.651343 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmb7m" event={"ID":"50281c85-5649-4d09-ad73-7d71173b0732","Type":"ContainerDied","Data":"c6dae600448b734ce16901cc8ff88da4ced8befd2d981f6cf5a07aa89653d8f4"} Dec 16 13:25:55 crc kubenswrapper[4805]: I1216 13:25:55.651540 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmb7m" event={"ID":"50281c85-5649-4d09-ad73-7d71173b0732","Type":"ContainerStarted","Data":"87281d70eafc358dede1b8d04c4c815dad21a8c5d1266178066a0c1905d961c7"} Dec 16 13:25:55 crc kubenswrapper[4805]: I1216 13:25:55.653306 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 16 13:25:56 crc kubenswrapper[4805]: I1216 13:25:56.723350 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmb7m" event={"ID":"50281c85-5649-4d09-ad73-7d71173b0732","Type":"ContainerStarted","Data":"16b677e06445758e77e513c5d05ad6e99541a566e522f388e8fb1d634a7b03bc"} Dec 16 13:26:00 crc kubenswrapper[4805]: I1216 13:26:00.819817 4805 generic.go:334] "Generic (PLEG): container finished" podID="50281c85-5649-4d09-ad73-7d71173b0732" containerID="16b677e06445758e77e513c5d05ad6e99541a566e522f388e8fb1d634a7b03bc" exitCode=0 Dec 16 13:26:00 crc kubenswrapper[4805]: I1216 13:26:00.819891 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmb7m" event={"ID":"50281c85-5649-4d09-ad73-7d71173b0732","Type":"ContainerDied","Data":"16b677e06445758e77e513c5d05ad6e99541a566e522f388e8fb1d634a7b03bc"} Dec 16 13:26:01 crc kubenswrapper[4805]: I1216 13:26:01.830162 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmb7m" event={"ID":"50281c85-5649-4d09-ad73-7d71173b0732","Type":"ContainerStarted","Data":"9ef61d70d20f3a0108f4a7dc48c153c6beef37b333e82d580713ebb8ba646166"} Dec 16 13:26:01 crc kubenswrapper[4805]: I1216 13:26:01.856242 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zmb7m" podStartSLOduration=2.02841208 
podStartE2EDuration="7.85620444s" podCreationTimestamp="2025-12-16 13:25:54 +0000 UTC" firstStartedPulling="2025-12-16 13:25:55.652930967 +0000 UTC m=+5429.371188772" lastFinishedPulling="2025-12-16 13:26:01.480723337 +0000 UTC m=+5435.198981132" observedRunningTime="2025-12-16 13:26:01.847379077 +0000 UTC m=+5435.565636882" watchObservedRunningTime="2025-12-16 13:26:01.85620444 +0000 UTC m=+5435.574462255" Dec 16 13:26:04 crc kubenswrapper[4805]: I1216 13:26:04.448122 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:26:04 crc kubenswrapper[4805]: I1216 13:26:04.448461 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:26:05 crc kubenswrapper[4805]: I1216 13:26:05.581856 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zmb7m" podUID="50281c85-5649-4d09-ad73-7d71173b0732" containerName="registry-server" probeResult="failure" output=< Dec 16 13:26:05 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Dec 16 13:26:05 crc kubenswrapper[4805]: > Dec 16 13:26:14 crc kubenswrapper[4805]: I1216 13:26:14.503808 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:26:14 crc kubenswrapper[4805]: I1216 13:26:14.556284 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:26:14 crc kubenswrapper[4805]: I1216 13:26:14.750206 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zmb7m"] Dec 16 13:26:15 crc kubenswrapper[4805]: I1216 13:26:15.960741 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zmb7m" podUID="50281c85-5649-4d09-ad73-7d71173b0732" containerName="registry-server" containerID="cri-o://9ef61d70d20f3a0108f4a7dc48c153c6beef37b333e82d580713ebb8ba646166" gracePeriod=2 Dec 16 13:26:16 crc kubenswrapper[4805]: I1216 13:26:16.463600 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:26:16 crc kubenswrapper[4805]: I1216 13:26:16.625591 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50281c85-5649-4d09-ad73-7d71173b0732-utilities\") pod \"50281c85-5649-4d09-ad73-7d71173b0732\" (UID: \"50281c85-5649-4d09-ad73-7d71173b0732\") " Dec 16 13:26:16 crc kubenswrapper[4805]: I1216 13:26:16.625676 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djzzz\" (UniqueName: \"kubernetes.io/projected/50281c85-5649-4d09-ad73-7d71173b0732-kube-api-access-djzzz\") pod \"50281c85-5649-4d09-ad73-7d71173b0732\" (UID: \"50281c85-5649-4d09-ad73-7d71173b0732\") " Dec 16 13:26:16 crc kubenswrapper[4805]: I1216 13:26:16.625858 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50281c85-5649-4d09-ad73-7d71173b0732-catalog-content\") pod \"50281c85-5649-4d09-ad73-7d71173b0732\" (UID: \"50281c85-5649-4d09-ad73-7d71173b0732\") " Dec 16 13:26:16 crc kubenswrapper[4805]: I1216 13:26:16.628600 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50281c85-5649-4d09-ad73-7d71173b0732-utilities" (OuterVolumeSpecName: "utilities") pod "50281c85-5649-4d09-ad73-7d71173b0732" (UID: "50281c85-5649-4d09-ad73-7d71173b0732"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:26:16 crc kubenswrapper[4805]: I1216 13:26:16.632543 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50281c85-5649-4d09-ad73-7d71173b0732-kube-api-access-djzzz" (OuterVolumeSpecName: "kube-api-access-djzzz") pod "50281c85-5649-4d09-ad73-7d71173b0732" (UID: "50281c85-5649-4d09-ad73-7d71173b0732"). InnerVolumeSpecName "kube-api-access-djzzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:26:16 crc kubenswrapper[4805]: I1216 13:26:16.728718 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50281c85-5649-4d09-ad73-7d71173b0732-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 13:26:16 crc kubenswrapper[4805]: I1216 13:26:16.728959 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djzzz\" (UniqueName: \"kubernetes.io/projected/50281c85-5649-4d09-ad73-7d71173b0732-kube-api-access-djzzz\") on node \"crc\" DevicePath \"\"" Dec 16 13:26:16 crc kubenswrapper[4805]: I1216 13:26:16.745716 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50281c85-5649-4d09-ad73-7d71173b0732-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50281c85-5649-4d09-ad73-7d71173b0732" (UID: "50281c85-5649-4d09-ad73-7d71173b0732"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:26:16 crc kubenswrapper[4805]: I1216 13:26:16.831889 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50281c85-5649-4d09-ad73-7d71173b0732-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 13:26:16 crc kubenswrapper[4805]: I1216 13:26:16.985069 4805 generic.go:334] "Generic (PLEG): container finished" podID="50281c85-5649-4d09-ad73-7d71173b0732" containerID="9ef61d70d20f3a0108f4a7dc48c153c6beef37b333e82d580713ebb8ba646166" exitCode=0 Dec 16 13:26:16 crc kubenswrapper[4805]: I1216 13:26:16.985371 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmb7m" event={"ID":"50281c85-5649-4d09-ad73-7d71173b0732","Type":"ContainerDied","Data":"9ef61d70d20f3a0108f4a7dc48c153c6beef37b333e82d580713ebb8ba646166"} Dec 16 13:26:16 crc kubenswrapper[4805]: I1216 13:26:16.985406 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmb7m" event={"ID":"50281c85-5649-4d09-ad73-7d71173b0732","Type":"ContainerDied","Data":"87281d70eafc358dede1b8d04c4c815dad21a8c5d1266178066a0c1905d961c7"} Dec 16 13:26:16 crc kubenswrapper[4805]: I1216 13:26:16.985431 4805 scope.go:117] "RemoveContainer" containerID="9ef61d70d20f3a0108f4a7dc48c153c6beef37b333e82d580713ebb8ba646166" Dec 16 13:26:16 crc kubenswrapper[4805]: I1216 13:26:16.985614 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zmb7m" Dec 16 13:26:17 crc kubenswrapper[4805]: I1216 13:26:17.032269 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zmb7m"] Dec 16 13:26:17 crc kubenswrapper[4805]: I1216 13:26:17.039893 4805 scope.go:117] "RemoveContainer" containerID="16b677e06445758e77e513c5d05ad6e99541a566e522f388e8fb1d634a7b03bc" Dec 16 13:26:17 crc kubenswrapper[4805]: I1216 13:26:17.040044 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zmb7m"] Dec 16 13:26:17 crc kubenswrapper[4805]: I1216 13:26:17.061455 4805 scope.go:117] "RemoveContainer" containerID="c6dae600448b734ce16901cc8ff88da4ced8befd2d981f6cf5a07aa89653d8f4" Dec 16 13:26:17 crc kubenswrapper[4805]: I1216 13:26:17.114790 4805 scope.go:117] "RemoveContainer" containerID="9ef61d70d20f3a0108f4a7dc48c153c6beef37b333e82d580713ebb8ba646166" Dec 16 13:26:17 crc kubenswrapper[4805]: E1216 13:26:17.115902 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef61d70d20f3a0108f4a7dc48c153c6beef37b333e82d580713ebb8ba646166\": container with ID starting with 9ef61d70d20f3a0108f4a7dc48c153c6beef37b333e82d580713ebb8ba646166 not found: ID does not exist" containerID="9ef61d70d20f3a0108f4a7dc48c153c6beef37b333e82d580713ebb8ba646166" Dec 16 13:26:17 crc kubenswrapper[4805]: I1216 13:26:17.115959 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef61d70d20f3a0108f4a7dc48c153c6beef37b333e82d580713ebb8ba646166"} err="failed to get container status \"9ef61d70d20f3a0108f4a7dc48c153c6beef37b333e82d580713ebb8ba646166\": rpc error: code = NotFound desc = could not find container \"9ef61d70d20f3a0108f4a7dc48c153c6beef37b333e82d580713ebb8ba646166\": container with ID starting with 9ef61d70d20f3a0108f4a7dc48c153c6beef37b333e82d580713ebb8ba646166 not found: ID does not exist" Dec 16 13:26:17 crc 
kubenswrapper[4805]: I1216 13:26:17.115989 4805 scope.go:117] "RemoveContainer" containerID="16b677e06445758e77e513c5d05ad6e99541a566e522f388e8fb1d634a7b03bc" Dec 16 13:26:17 crc kubenswrapper[4805]: E1216 13:26:17.116485 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16b677e06445758e77e513c5d05ad6e99541a566e522f388e8fb1d634a7b03bc\": container with ID starting with 16b677e06445758e77e513c5d05ad6e99541a566e522f388e8fb1d634a7b03bc not found: ID does not exist" containerID="16b677e06445758e77e513c5d05ad6e99541a566e522f388e8fb1d634a7b03bc" Dec 16 13:26:17 crc kubenswrapper[4805]: I1216 13:26:17.116525 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16b677e06445758e77e513c5d05ad6e99541a566e522f388e8fb1d634a7b03bc"} err="failed to get container status \"16b677e06445758e77e513c5d05ad6e99541a566e522f388e8fb1d634a7b03bc\": rpc error: code = NotFound desc = could not find container \"16b677e06445758e77e513c5d05ad6e99541a566e522f388e8fb1d634a7b03bc\": container with ID starting with 16b677e06445758e77e513c5d05ad6e99541a566e522f388e8fb1d634a7b03bc not found: ID does not exist" Dec 16 13:26:17 crc kubenswrapper[4805]: I1216 13:26:17.116561 4805 scope.go:117] "RemoveContainer" containerID="c6dae600448b734ce16901cc8ff88da4ced8befd2d981f6cf5a07aa89653d8f4" Dec 16 13:26:17 crc kubenswrapper[4805]: E1216 13:26:17.116944 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6dae600448b734ce16901cc8ff88da4ced8befd2d981f6cf5a07aa89653d8f4\": container with ID starting with c6dae600448b734ce16901cc8ff88da4ced8befd2d981f6cf5a07aa89653d8f4 not found: ID does not exist" containerID="c6dae600448b734ce16901cc8ff88da4ced8befd2d981f6cf5a07aa89653d8f4" Dec 16 13:26:17 crc kubenswrapper[4805]: I1216 13:26:17.117007 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6dae600448b734ce16901cc8ff88da4ced8befd2d981f6cf5a07aa89653d8f4"} err="failed to get container status \"c6dae600448b734ce16901cc8ff88da4ced8befd2d981f6cf5a07aa89653d8f4\": rpc error: code = NotFound desc = could not find container \"c6dae600448b734ce16901cc8ff88da4ced8befd2d981f6cf5a07aa89653d8f4\": container with ID starting with c6dae600448b734ce16901cc8ff88da4ced8befd2d981f6cf5a07aa89653d8f4 not found: ID does not exist" Dec 16 13:26:18 crc kubenswrapper[4805]: I1216 13:26:18.534655 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50281c85-5649-4d09-ad73-7d71173b0732" path="/var/lib/kubelet/pods/50281c85-5649-4d09-ad73-7d71173b0732/volumes" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.590053 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9f5ft/must-gather-ntr57"] Dec 16 13:26:27 crc kubenswrapper[4805]: E1216 13:26:27.591323 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50281c85-5649-4d09-ad73-7d71173b0732" containerName="extract-utilities" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.591341 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="50281c85-5649-4d09-ad73-7d71173b0732" containerName="extract-utilities" Dec 16 13:26:27 crc kubenswrapper[4805]: E1216 13:26:27.591355 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50281c85-5649-4d09-ad73-7d71173b0732" containerName="registry-server" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.591416 4805 
state_mem.go:107] "Deleted CPUSet assignment" podUID="50281c85-5649-4d09-ad73-7d71173b0732" containerName="registry-server" Dec 16 13:26:27 crc kubenswrapper[4805]: E1216 13:26:27.591440 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50281c85-5649-4d09-ad73-7d71173b0732" containerName="extract-content" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.591449 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="50281c85-5649-4d09-ad73-7d71173b0732" containerName="extract-content" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.591731 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="50281c85-5649-4d09-ad73-7d71173b0732" containerName="registry-server" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.593087 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9f5ft/must-gather-ntr57" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.597476 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9f5ft"/"openshift-service-ca.crt" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.598061 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9f5ft"/"default-dockercfg-d8wjk" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.598517 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9f5ft"/"kube-root-ca.crt" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.609023 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9f5ft/must-gather-ntr57"] Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.676187 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5rnw\" (UniqueName: \"kubernetes.io/projected/ecc77c59-2496-4007-97a8-6f3dee9fc5ba-kube-api-access-d5rnw\") pod \"must-gather-ntr57\" (UID: \"ecc77c59-2496-4007-97a8-6f3dee9fc5ba\") " pod="openshift-must-gather-9f5ft/must-gather-ntr57" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.676608 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ecc77c59-2496-4007-97a8-6f3dee9fc5ba-must-gather-output\") pod \"must-gather-ntr57\" (UID: \"ecc77c59-2496-4007-97a8-6f3dee9fc5ba\") " pod="openshift-must-gather-9f5ft/must-gather-ntr57" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.777912 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5rnw\" (UniqueName: \"kubernetes.io/projected/ecc77c59-2496-4007-97a8-6f3dee9fc5ba-kube-api-access-d5rnw\") pod \"must-gather-ntr57\" (UID: \"ecc77c59-2496-4007-97a8-6f3dee9fc5ba\") " pod="openshift-must-gather-9f5ft/must-gather-ntr57" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.777975 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ecc77c59-2496-4007-97a8-6f3dee9fc5ba-must-gather-output\") pod \"must-gather-ntr57\" (UID: \"ecc77c59-2496-4007-97a8-6f3dee9fc5ba\") " pod="openshift-must-gather-9f5ft/must-gather-ntr57" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.778541 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ecc77c59-2496-4007-97a8-6f3dee9fc5ba-must-gather-output\") pod \"must-gather-ntr57\" (UID: 
\"ecc77c59-2496-4007-97a8-6f3dee9fc5ba\") " pod="openshift-must-gather-9f5ft/must-gather-ntr57" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.818565 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5rnw\" (UniqueName: \"kubernetes.io/projected/ecc77c59-2496-4007-97a8-6f3dee9fc5ba-kube-api-access-d5rnw\") pod \"must-gather-ntr57\" (UID: \"ecc77c59-2496-4007-97a8-6f3dee9fc5ba\") " pod="openshift-must-gather-9f5ft/must-gather-ntr57" Dec 16 13:26:27 crc kubenswrapper[4805]: I1216 13:26:27.914073 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9f5ft/must-gather-ntr57" Dec 16 13:26:28 crc kubenswrapper[4805]: I1216 13:26:28.450635 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9f5ft/must-gather-ntr57"] Dec 16 13:26:29 crc kubenswrapper[4805]: I1216 13:26:29.109768 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9f5ft/must-gather-ntr57" event={"ID":"ecc77c59-2496-4007-97a8-6f3dee9fc5ba","Type":"ContainerStarted","Data":"51a24617866229924af05cae5ef8ec6f397139516924f6e8361f83ba5febad14"} Dec 16 13:26:29 crc kubenswrapper[4805]: I1216 13:26:29.110121 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9f5ft/must-gather-ntr57" event={"ID":"ecc77c59-2496-4007-97a8-6f3dee9fc5ba","Type":"ContainerStarted","Data":"9a4a4d96c31fae872954b0451c4eb11deb8b60aa5b64228478998869bbee2407"} Dec 16 13:26:29 crc kubenswrapper[4805]: I1216 13:26:29.110161 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9f5ft/must-gather-ntr57" event={"ID":"ecc77c59-2496-4007-97a8-6f3dee9fc5ba","Type":"ContainerStarted","Data":"bae7001fe11f943dde6ddc9fb8aa16b43fa1ac869a41719e0f66464955b9899d"} Dec 16 13:26:29 crc kubenswrapper[4805]: I1216 13:26:29.137534 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9f5ft/must-gather-ntr57" podStartSLOduration=2.137509052 podStartE2EDuration="2.137509052s" podCreationTimestamp="2025-12-16 13:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:26:29.129833492 +0000 UTC m=+5462.848091297" watchObservedRunningTime="2025-12-16 13:26:29.137509052 +0000 UTC m=+5462.855766867" Dec 16 13:26:33 crc kubenswrapper[4805]: I1216 13:26:33.153672 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9f5ft/crc-debug-wsr8z"] Dec 16 13:26:33 crc kubenswrapper[4805]: I1216 13:26:33.155892 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9f5ft/crc-debug-wsr8z" Dec 16 13:26:33 crc kubenswrapper[4805]: I1216 13:26:33.194862 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhnx7\" (UniqueName: \"kubernetes.io/projected/5b6a8567-7c05-4208-b1f8-5582628d18f2-kube-api-access-jhnx7\") pod \"crc-debug-wsr8z\" (UID: \"5b6a8567-7c05-4208-b1f8-5582628d18f2\") " pod="openshift-must-gather-9f5ft/crc-debug-wsr8z" Dec 16 13:26:33 crc kubenswrapper[4805]: I1216 13:26:33.194969 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b6a8567-7c05-4208-b1f8-5582628d18f2-host\") pod \"crc-debug-wsr8z\" (UID: \"5b6a8567-7c05-4208-b1f8-5582628d18f2\") " pod="openshift-must-gather-9f5ft/crc-debug-wsr8z" Dec 16 13:26:33 crc kubenswrapper[4805]: I1216 13:26:33.296893 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhnx7\" (UniqueName: \"kubernetes.io/projected/5b6a8567-7c05-4208-b1f8-5582628d18f2-kube-api-access-jhnx7\") pod \"crc-debug-wsr8z\" (UID: \"5b6a8567-7c05-4208-b1f8-5582628d18f2\") " pod="openshift-must-gather-9f5ft/crc-debug-wsr8z" Dec 16 13:26:33 crc kubenswrapper[4805]: I1216 13:26:33.297582 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b6a8567-7c05-4208-b1f8-5582628d18f2-host\") pod \"crc-debug-wsr8z\" (UID: \"5b6a8567-7c05-4208-b1f8-5582628d18f2\") " pod="openshift-must-gather-9f5ft/crc-debug-wsr8z" Dec 16 13:26:33 crc kubenswrapper[4805]: I1216 13:26:33.297768 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b6a8567-7c05-4208-b1f8-5582628d18f2-host\") pod \"crc-debug-wsr8z\" (UID: \"5b6a8567-7c05-4208-b1f8-5582628d18f2\") " pod="openshift-must-gather-9f5ft/crc-debug-wsr8z" Dec 16 13:26:33 crc kubenswrapper[4805]: I1216 13:26:33.320987 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhnx7\" (UniqueName: \"kubernetes.io/projected/5b6a8567-7c05-4208-b1f8-5582628d18f2-kube-api-access-jhnx7\") pod \"crc-debug-wsr8z\" (UID: \"5b6a8567-7c05-4208-b1f8-5582628d18f2\") " pod="openshift-must-gather-9f5ft/crc-debug-wsr8z" Dec 16 13:26:33 crc kubenswrapper[4805]: I1216 13:26:33.477102 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9f5ft/crc-debug-wsr8z" Dec 16 13:26:34 crc kubenswrapper[4805]: I1216 13:26:34.158853 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9f5ft/crc-debug-wsr8z" event={"ID":"5b6a8567-7c05-4208-b1f8-5582628d18f2","Type":"ContainerStarted","Data":"19e59776097c5254204c02b3472aac7aa824c0a0b33ed6d86b473f7c19f329aa"} Dec 16 13:26:34 crc kubenswrapper[4805]: I1216 13:26:34.159367 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9f5ft/crc-debug-wsr8z" event={"ID":"5b6a8567-7c05-4208-b1f8-5582628d18f2","Type":"ContainerStarted","Data":"01f79758225783fb375b0b50feb187f03e78aae36d2727dda84711e39130c912"} Dec 16 13:26:34 crc kubenswrapper[4805]: I1216 13:26:34.173906 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9f5ft/crc-debug-wsr8z" podStartSLOduration=1.173887111 podStartE2EDuration="1.173887111s" podCreationTimestamp="2025-12-16 13:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:26:34.171462701 +0000 UTC m=+5467.889720506" watchObservedRunningTime="2025-12-16 13:26:34.173887111 +0000 UTC m=+5467.892144926" Dec 16 13:26:57 crc kubenswrapper[4805]: I1216 13:26:57.071775 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 13:26:57 crc kubenswrapper[4805]: I1216 13:26:57.072337 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 13:27:20 crc kubenswrapper[4805]: I1216 13:27:20.937912 4805 generic.go:334] "Generic (PLEG): container finished" podID="5b6a8567-7c05-4208-b1f8-5582628d18f2" containerID="19e59776097c5254204c02b3472aac7aa824c0a0b33ed6d86b473f7c19f329aa" exitCode=0 Dec 16 13:27:20 crc kubenswrapper[4805]: I1216 13:27:20.937975 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9f5ft/crc-debug-wsr8z" event={"ID":"5b6a8567-7c05-4208-b1f8-5582628d18f2","Type":"ContainerDied","Data":"19e59776097c5254204c02b3472aac7aa824c0a0b33ed6d86b473f7c19f329aa"} Dec 16 13:27:22 crc kubenswrapper[4805]: I1216 13:27:22.089583 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9f5ft/crc-debug-wsr8z" Dec 16 13:27:22 crc kubenswrapper[4805]: I1216 13:27:22.130394 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9f5ft/crc-debug-wsr8z"] Dec 16 13:27:22 crc kubenswrapper[4805]: I1216 13:27:22.143249 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9f5ft/crc-debug-wsr8z"] Dec 16 13:27:22 crc kubenswrapper[4805]: I1216 13:27:22.274081 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b6a8567-7c05-4208-b1f8-5582628d18f2-host\") pod \"5b6a8567-7c05-4208-b1f8-5582628d18f2\" (UID: \"5b6a8567-7c05-4208-b1f8-5582628d18f2\") " Dec 16 13:27:22 crc kubenswrapper[4805]: I1216 13:27:22.274161 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhnx7\" (UniqueName: \"kubernetes.io/projected/5b6a8567-7c05-4208-b1f8-5582628d18f2-kube-api-access-jhnx7\") pod \"5b6a8567-7c05-4208-b1f8-5582628d18f2\" (UID: \"5b6a8567-7c05-4208-b1f8-5582628d18f2\") " Dec 16 13:27:22 crc kubenswrapper[4805]: I1216 13:27:22.274224 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6a8567-7c05-4208-b1f8-5582628d18f2-host" (OuterVolumeSpecName: "host") pod "5b6a8567-7c05-4208-b1f8-5582628d18f2" (UID: "5b6a8567-7c05-4208-b1f8-5582628d18f2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 13:27:22 crc kubenswrapper[4805]: I1216 13:27:22.274857 4805 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b6a8567-7c05-4208-b1f8-5582628d18f2-host\") on node \"crc\" DevicePath \"\"" Dec 16 13:27:22 crc kubenswrapper[4805]: I1216 13:27:22.282463 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b6a8567-7c05-4208-b1f8-5582628d18f2-kube-api-access-jhnx7" (OuterVolumeSpecName: "kube-api-access-jhnx7") pod "5b6a8567-7c05-4208-b1f8-5582628d18f2" (UID: "5b6a8567-7c05-4208-b1f8-5582628d18f2"). InnerVolumeSpecName "kube-api-access-jhnx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:27:22 crc kubenswrapper[4805]: I1216 13:27:22.376860 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhnx7\" (UniqueName: \"kubernetes.io/projected/5b6a8567-7c05-4208-b1f8-5582628d18f2-kube-api-access-jhnx7\") on node \"crc\" DevicePath \"\"" Dec 16 13:27:22 crc kubenswrapper[4805]: I1216 13:27:22.534363 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b6a8567-7c05-4208-b1f8-5582628d18f2" path="/var/lib/kubelet/pods/5b6a8567-7c05-4208-b1f8-5582628d18f2/volumes" Dec 16 13:27:22 crc kubenswrapper[4805]: I1216 13:27:22.956960 4805 scope.go:117] "RemoveContainer" containerID="19e59776097c5254204c02b3472aac7aa824c0a0b33ed6d86b473f7c19f329aa" Dec 16 13:27:22 crc kubenswrapper[4805]: I1216 13:27:22.957020 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9f5ft/crc-debug-wsr8z" Dec 16 13:27:23 crc kubenswrapper[4805]: I1216 13:27:23.532255 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9f5ft/crc-debug-hld2g"] Dec 16 13:27:23 crc kubenswrapper[4805]: E1216 13:27:23.532927 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6a8567-7c05-4208-b1f8-5582628d18f2" containerName="container-00" Dec 16 13:27:23 crc kubenswrapper[4805]: I1216 13:27:23.532941 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6a8567-7c05-4208-b1f8-5582628d18f2" containerName="container-00" Dec 16 13:27:23 crc kubenswrapper[4805]: I1216 13:27:23.533217 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6a8567-7c05-4208-b1f8-5582628d18f2" containerName="container-00" Dec 16 13:27:23 crc kubenswrapper[4805]: I1216 13:27:23.533842 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9f5ft/crc-debug-hld2g" Dec 16 13:27:23 crc kubenswrapper[4805]: I1216 13:27:23.704066 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4498b65a-539a-4d41-9860-aeb6bc7d7858-host\") pod \"crc-debug-hld2g\" (UID: \"4498b65a-539a-4d41-9860-aeb6bc7d7858\") " pod="openshift-must-gather-9f5ft/crc-debug-hld2g" Dec 16 13:27:23 crc kubenswrapper[4805]: I1216 13:27:23.704121 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p55wv\" (UniqueName: \"kubernetes.io/projected/4498b65a-539a-4d41-9860-aeb6bc7d7858-kube-api-access-p55wv\") pod \"crc-debug-hld2g\" (UID: \"4498b65a-539a-4d41-9860-aeb6bc7d7858\") " pod="openshift-must-gather-9f5ft/crc-debug-hld2g" Dec 16 13:27:23 crc kubenswrapper[4805]: I1216 13:27:23.806711 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4498b65a-539a-4d41-9860-aeb6bc7d7858-host\") pod \"crc-debug-hld2g\" (UID: \"4498b65a-539a-4d41-9860-aeb6bc7d7858\") " pod="openshift-must-gather-9f5ft/crc-debug-hld2g" Dec 16 13:27:23 crc kubenswrapper[4805]: I1216 13:27:23.806765 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p55wv\" (UniqueName: \"kubernetes.io/projected/4498b65a-539a-4d41-9860-aeb6bc7d7858-kube-api-access-p55wv\") pod \"crc-debug-hld2g\" (UID: \"4498b65a-539a-4d41-9860-aeb6bc7d7858\") " pod="openshift-must-gather-9f5ft/crc-debug-hld2g" Dec 16 13:27:23 crc kubenswrapper[4805]: I1216 13:27:23.806853 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4498b65a-539a-4d41-9860-aeb6bc7d7858-host\") pod \"crc-debug-hld2g\" (UID: \"4498b65a-539a-4d41-9860-aeb6bc7d7858\") " pod="openshift-must-gather-9f5ft/crc-debug-hld2g" Dec 16 13:27:23 crc kubenswrapper[4805]: I1216 13:27:23.830678 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p55wv\" (UniqueName: \"kubernetes.io/projected/4498b65a-539a-4d41-9860-aeb6bc7d7858-kube-api-access-p55wv\") pod \"crc-debug-hld2g\" (UID: \"4498b65a-539a-4d41-9860-aeb6bc7d7858\") " pod="openshift-must-gather-9f5ft/crc-debug-hld2g" Dec 16 13:27:23 crc kubenswrapper[4805]: I1216 13:27:23.850933 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9f5ft/crc-debug-hld2g" Dec 16 13:27:23 crc kubenswrapper[4805]: W1216 13:27:23.891903 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4498b65a_539a_4d41_9860_aeb6bc7d7858.slice/crio-310ac4fc44c76d71be1aef4811a5e5335652938aa72d9e2c2332c5e3c8b75fa6 WatchSource:0}: Error finding container 310ac4fc44c76d71be1aef4811a5e5335652938aa72d9e2c2332c5e3c8b75fa6: Status 404 returned error can't find the container with id 310ac4fc44c76d71be1aef4811a5e5335652938aa72d9e2c2332c5e3c8b75fa6 Dec 16 13:27:23 crc kubenswrapper[4805]: I1216 13:27:23.971670 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9f5ft/crc-debug-hld2g" event={"ID":"4498b65a-539a-4d41-9860-aeb6bc7d7858","Type":"ContainerStarted","Data":"310ac4fc44c76d71be1aef4811a5e5335652938aa72d9e2c2332c5e3c8b75fa6"} Dec 16 13:27:24 crc kubenswrapper[4805]: I1216 13:27:24.981575 4805 generic.go:334] "Generic (PLEG): container finished" podID="4498b65a-539a-4d41-9860-aeb6bc7d7858" containerID="abeaf6b1ef574b0344d7d01611ad5d8b46eaece3c1ec8de803cb0afb9c3ae413" exitCode=0 Dec 16 13:27:24 crc kubenswrapper[4805]: I1216 13:27:24.982396 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9f5ft/crc-debug-hld2g" event={"ID":"4498b65a-539a-4d41-9860-aeb6bc7d7858","Type":"ContainerDied","Data":"abeaf6b1ef574b0344d7d01611ad5d8b46eaece3c1ec8de803cb0afb9c3ae413"} Dec 16 13:27:26 crc kubenswrapper[4805]: I1216 13:27:26.143294 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9f5ft/crc-debug-hld2g" Dec 16 13:27:26 crc kubenswrapper[4805]: I1216 13:27:26.258381 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4498b65a-539a-4d41-9860-aeb6bc7d7858-host\") pod \"4498b65a-539a-4d41-9860-aeb6bc7d7858\" (UID: \"4498b65a-539a-4d41-9860-aeb6bc7d7858\") " Dec 16 13:27:26 crc kubenswrapper[4805]: I1216 13:27:26.258510 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4498b65a-539a-4d41-9860-aeb6bc7d7858-host" (OuterVolumeSpecName: "host") pod "4498b65a-539a-4d41-9860-aeb6bc7d7858" (UID: "4498b65a-539a-4d41-9860-aeb6bc7d7858"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 13:27:26 crc kubenswrapper[4805]: I1216 13:27:26.258548 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p55wv\" (UniqueName: \"kubernetes.io/projected/4498b65a-539a-4d41-9860-aeb6bc7d7858-kube-api-access-p55wv\") pod \"4498b65a-539a-4d41-9860-aeb6bc7d7858\" (UID: \"4498b65a-539a-4d41-9860-aeb6bc7d7858\") " Dec 16 13:27:26 crc kubenswrapper[4805]: I1216 13:27:26.258983 4805 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4498b65a-539a-4d41-9860-aeb6bc7d7858-host\") on node \"crc\" DevicePath \"\"" Dec 16 13:27:26 crc kubenswrapper[4805]: I1216 13:27:26.299374 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4498b65a-539a-4d41-9860-aeb6bc7d7858-kube-api-access-p55wv" (OuterVolumeSpecName: "kube-api-access-p55wv") pod "4498b65a-539a-4d41-9860-aeb6bc7d7858" (UID: "4498b65a-539a-4d41-9860-aeb6bc7d7858"). InnerVolumeSpecName "kube-api-access-p55wv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:27:26 crc kubenswrapper[4805]: I1216 13:27:26.360549 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p55wv\" (UniqueName: \"kubernetes.io/projected/4498b65a-539a-4d41-9860-aeb6bc7d7858-kube-api-access-p55wv\") on node \"crc\" DevicePath \"\"" Dec 16 13:27:27 crc kubenswrapper[4805]: I1216 13:27:27.011308 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9f5ft/crc-debug-hld2g" event={"ID":"4498b65a-539a-4d41-9860-aeb6bc7d7858","Type":"ContainerDied","Data":"310ac4fc44c76d71be1aef4811a5e5335652938aa72d9e2c2332c5e3c8b75fa6"} Dec 16 13:27:27 crc kubenswrapper[4805]: I1216 13:27:27.011434 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310ac4fc44c76d71be1aef4811a5e5335652938aa72d9e2c2332c5e3c8b75fa6" Dec 16 13:27:27 crc kubenswrapper[4805]: I1216 13:27:27.011506 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9f5ft/crc-debug-hld2g" Dec 16 13:27:27 crc kubenswrapper[4805]: I1216 13:27:27.071638 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 13:27:27 crc kubenswrapper[4805]: I1216 13:27:27.071689 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 13:27:27 crc kubenswrapper[4805]: I1216 13:27:27.402373 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9f5ft/crc-debug-hld2g"] Dec 16 13:27:27 crc kubenswrapper[4805]: I1216 13:27:27.415026 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9f5ft/crc-debug-hld2g"] Dec 16 13:27:28 crc kubenswrapper[4805]: I1216 13:27:28.534293 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4498b65a-539a-4d41-9860-aeb6bc7d7858" path="/var/lib/kubelet/pods/4498b65a-539a-4d41-9860-aeb6bc7d7858/volumes" Dec 16 13:27:28 crc kubenswrapper[4805]: I1216 13:27:28.799136 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9f5ft/crc-debug-grv7r"] Dec 16 13:27:28 crc kubenswrapper[4805]: E1216 13:27:28.808337 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4498b65a-539a-4d41-9860-aeb6bc7d7858" containerName="container-00" Dec 16 13:27:28 crc kubenswrapper[4805]: I1216 13:27:28.808356 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4498b65a-539a-4d41-9860-aeb6bc7d7858" containerName="container-00" Dec 16 13:27:28 crc kubenswrapper[4805]: I1216 13:27:28.808560 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4498b65a-539a-4d41-9860-aeb6bc7d7858" containerName="container-00" Dec 16 13:27:28 crc kubenswrapper[4805]: I1216 13:27:28.809250 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9f5ft/crc-debug-grv7r" Dec 16 13:27:28 crc kubenswrapper[4805]: I1216 13:27:28.820088 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be06ea4a-900e-42b5-b288-8769d436b5ab-host\") pod \"crc-debug-grv7r\" (UID: \"be06ea4a-900e-42b5-b288-8769d436b5ab\") " pod="openshift-must-gather-9f5ft/crc-debug-grv7r" Dec 16 13:27:28 crc kubenswrapper[4805]: I1216 13:27:28.820194 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rndcm\" (UniqueName: \"kubernetes.io/projected/be06ea4a-900e-42b5-b288-8769d436b5ab-kube-api-access-rndcm\") pod \"crc-debug-grv7r\" (UID: \"be06ea4a-900e-42b5-b288-8769d436b5ab\") " pod="openshift-must-gather-9f5ft/crc-debug-grv7r" Dec 16 13:27:28 crc kubenswrapper[4805]: I1216 13:27:28.922991 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rndcm\" (UniqueName: \"kubernetes.io/projected/be06ea4a-900e-42b5-b288-8769d436b5ab-kube-api-access-rndcm\") pod \"crc-debug-grv7r\" (UID: \"be06ea4a-900e-42b5-b288-8769d436b5ab\") " pod="openshift-must-gather-9f5ft/crc-debug-grv7r" Dec 16 13:27:28 crc kubenswrapper[4805]: I1216 13:27:28.923299 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be06ea4a-900e-42b5-b288-8769d436b5ab-host\") pod \"crc-debug-grv7r\" (UID: \"be06ea4a-900e-42b5-b288-8769d436b5ab\") " pod="openshift-must-gather-9f5ft/crc-debug-grv7r" Dec 16 13:27:28 crc kubenswrapper[4805]: I1216 13:27:28.923427 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be06ea4a-900e-42b5-b288-8769d436b5ab-host\") pod \"crc-debug-grv7r\" (UID: \"be06ea4a-900e-42b5-b288-8769d436b5ab\") " pod="openshift-must-gather-9f5ft/crc-debug-grv7r" Dec 16 13:27:28 crc kubenswrapper[4805]: I1216 13:27:28.947313 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rndcm\" (UniqueName: \"kubernetes.io/projected/be06ea4a-900e-42b5-b288-8769d436b5ab-kube-api-access-rndcm\") pod \"crc-debug-grv7r\" (UID: \"be06ea4a-900e-42b5-b288-8769d436b5ab\") " pod="openshift-must-gather-9f5ft/crc-debug-grv7r" Dec 16 13:27:29 crc kubenswrapper[4805]: I1216 13:27:29.127609 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9f5ft/crc-debug-grv7r" Dec 16 13:27:29 crc kubenswrapper[4805]: W1216 13:27:29.177485 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe06ea4a_900e_42b5_b288_8769d436b5ab.slice/crio-a2cb06fd4ea76a4e22e1f4d01d4cd398b4435bc0151bcbf89fcb6a85d7e32466 WatchSource:0}: Error finding container a2cb06fd4ea76a4e22e1f4d01d4cd398b4435bc0151bcbf89fcb6a85d7e32466: Status 404 returned error can't find the container with id a2cb06fd4ea76a4e22e1f4d01d4cd398b4435bc0151bcbf89fcb6a85d7e32466 Dec 16 13:27:30 crc kubenswrapper[4805]: I1216 13:27:30.042796 4805 generic.go:334] "Generic (PLEG): container finished" podID="be06ea4a-900e-42b5-b288-8769d436b5ab" containerID="537358e9dac4fdc069c0e57bfe55a9d4d10a6c8731ea2e4ea74b8d84323bbe6d" exitCode=0 Dec 16 13:27:30 crc kubenswrapper[4805]: I1216 13:27:30.042888 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9f5ft/crc-debug-grv7r" event={"ID":"be06ea4a-900e-42b5-b288-8769d436b5ab","Type":"ContainerDied","Data":"537358e9dac4fdc069c0e57bfe55a9d4d10a6c8731ea2e4ea74b8d84323bbe6d"} Dec 16 13:27:30 crc kubenswrapper[4805]: I1216 13:27:30.043122 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9f5ft/crc-debug-grv7r" event={"ID":"be06ea4a-900e-42b5-b288-8769d436b5ab","Type":"ContainerStarted","Data":"a2cb06fd4ea76a4e22e1f4d01d4cd398b4435bc0151bcbf89fcb6a85d7e32466"} Dec 16 13:27:30 crc kubenswrapper[4805]: I1216 13:27:30.114922 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9f5ft/crc-debug-grv7r"] Dec 16 13:27:30 crc kubenswrapper[4805]: I1216 13:27:30.132333 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9f5ft/crc-debug-grv7r"] Dec 16 13:27:31 crc kubenswrapper[4805]: I1216 13:27:31.159764 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9f5ft/crc-debug-grv7r" Dec 16 13:27:31 crc kubenswrapper[4805]: I1216 13:27:31.265569 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be06ea4a-900e-42b5-b288-8769d436b5ab-host\") pod \"be06ea4a-900e-42b5-b288-8769d436b5ab\" (UID: \"be06ea4a-900e-42b5-b288-8769d436b5ab\") " Dec 16 13:27:31 crc kubenswrapper[4805]: I1216 13:27:31.265844 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rndcm\" (UniqueName: \"kubernetes.io/projected/be06ea4a-900e-42b5-b288-8769d436b5ab-kube-api-access-rndcm\") pod \"be06ea4a-900e-42b5-b288-8769d436b5ab\" (UID: \"be06ea4a-900e-42b5-b288-8769d436b5ab\") " Dec 16 13:27:31 crc kubenswrapper[4805]: I1216 13:27:31.266032 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be06ea4a-900e-42b5-b288-8769d436b5ab-host" (OuterVolumeSpecName: "host") pod "be06ea4a-900e-42b5-b288-8769d436b5ab" (UID: "be06ea4a-900e-42b5-b288-8769d436b5ab"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 16 13:27:31 crc kubenswrapper[4805]: I1216 13:27:31.266370 4805 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be06ea4a-900e-42b5-b288-8769d436b5ab-host\") on node \"crc\" DevicePath \"\"" Dec 16 13:27:31 crc kubenswrapper[4805]: I1216 13:27:31.272037 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be06ea4a-900e-42b5-b288-8769d436b5ab-kube-api-access-rndcm" (OuterVolumeSpecName: "kube-api-access-rndcm") pod "be06ea4a-900e-42b5-b288-8769d436b5ab" (UID: "be06ea4a-900e-42b5-b288-8769d436b5ab"). InnerVolumeSpecName "kube-api-access-rndcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:27:31 crc kubenswrapper[4805]: I1216 13:27:31.369953 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rndcm\" (UniqueName: \"kubernetes.io/projected/be06ea4a-900e-42b5-b288-8769d436b5ab-kube-api-access-rndcm\") on node \"crc\" DevicePath \"\"" Dec 16 13:27:32 crc kubenswrapper[4805]: I1216 13:27:32.064191 4805 scope.go:117] "RemoveContainer" containerID="537358e9dac4fdc069c0e57bfe55a9d4d10a6c8731ea2e4ea74b8d84323bbe6d" Dec 16 13:27:32 crc kubenswrapper[4805]: I1216 13:27:32.064252 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9f5ft/crc-debug-grv7r" Dec 16 13:27:32 crc kubenswrapper[4805]: I1216 13:27:32.533115 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be06ea4a-900e-42b5-b288-8769d436b5ab" path="/var/lib/kubelet/pods/be06ea4a-900e-42b5-b288-8769d436b5ab/volumes" Dec 16 13:27:57 crc kubenswrapper[4805]: I1216 13:27:57.071460 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 13:27:57 crc kubenswrapper[4805]: I1216 13:27:57.072155 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 13:27:57 crc kubenswrapper[4805]: I1216 13:27:57.072239 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 13:27:57 crc kubenswrapper[4805]: I1216 13:27:57.073127 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e40366d476e3b90378575a27eb523bfc7438df121ddfab745128aced0f99f31"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 13:27:57 crc kubenswrapper[4805]: I1216 13:27:57.073222 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://5e40366d476e3b90378575a27eb523bfc7438df121ddfab745128aced0f99f31" gracePeriod=600 Dec 16 13:27:57 crc kubenswrapper[4805]: I1216 13:27:57.478356 4805 generic.go:334] "Generic (PLEG): 
container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="5e40366d476e3b90378575a27eb523bfc7438df121ddfab745128aced0f99f31" exitCode=0 Dec 16 13:27:57 crc kubenswrapper[4805]: I1216 13:27:57.478412 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"5e40366d476e3b90378575a27eb523bfc7438df121ddfab745128aced0f99f31"} Dec 16 13:27:57 crc kubenswrapper[4805]: I1216 13:27:57.478615 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924"} Dec 16 13:27:57 crc kubenswrapper[4805]: I1216 13:27:57.478657 4805 scope.go:117] "RemoveContainer" containerID="0506916a5a96babc6dc737fb0d44305d59bb9b7e0bea3f8574156f6fd2e49fa2" Dec 16 13:28:05 crc kubenswrapper[4805]: I1216 13:28:05.778202 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c96b7bb8b-r7k4g_7d8752f0-5469-43e4-9284-8dd712bfd63f/barbican-api/0.log" Dec 16 13:28:05 crc kubenswrapper[4805]: I1216 13:28:05.952999 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c96b7bb8b-r7k4g_7d8752f0-5469-43e4-9284-8dd712bfd63f/barbican-api-log/0.log" Dec 16 13:28:06 crc kubenswrapper[4805]: I1216 13:28:06.106749 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-64dcddf9dd-sdrs9_beba05fa-4b9d-44f2-88f4-87611a38604b/barbican-keystone-listener/0.log" Dec 16 13:28:06 crc kubenswrapper[4805]: I1216 13:28:06.181488 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-64dcddf9dd-sdrs9_beba05fa-4b9d-44f2-88f4-87611a38604b/barbican-keystone-listener-log/0.log" Dec 16 13:28:06 crc kubenswrapper[4805]: I1216 13:28:06.340782 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-c4448787f-5pqnk_c60ef5b9-ec24-43c3-ab83-7a6f10a972bc/barbican-worker/0.log" Dec 16 13:28:06 crc kubenswrapper[4805]: I1216 13:28:06.407979 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-c4448787f-5pqnk_c60ef5b9-ec24-43c3-ab83-7a6f10a972bc/barbican-worker-log/0.log" Dec 16 13:28:06 crc kubenswrapper[4805]: I1216 13:28:06.509865 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-w9mmb_d0396b0a-2aae-4507-a31e-cbd5f936f3eb/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:06 crc kubenswrapper[4805]: I1216 13:28:06.776979 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_129d76c2-c0d0-4b1f-8157-ecc2abae65be/ceilometer-central-agent/0.log" Dec 16 13:28:06 crc kubenswrapper[4805]: I1216 13:28:06.791358 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_129d76c2-c0d0-4b1f-8157-ecc2abae65be/proxy-httpd/0.log" Dec 16 13:28:06 crc kubenswrapper[4805]: I1216 13:28:06.841273 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_129d76c2-c0d0-4b1f-8157-ecc2abae65be/ceilometer-notification-agent/0.log" Dec 16 13:28:06 crc kubenswrapper[4805]: I1216 13:28:06.999119 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_129d76c2-c0d0-4b1f-8157-ecc2abae65be/sg-core/0.log" Dec 16 13:28:07 crc kubenswrapper[4805]: I1216 13:28:07.090892 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bdfe5fb7-a1ea-4cd6-887d-46b30ece2329/cinder-api-log/0.log" Dec 16 13:28:07 crc kubenswrapper[4805]: I1216 13:28:07.156732 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bdfe5fb7-a1ea-4cd6-887d-46b30ece2329/cinder-api/0.log" Dec 16 13:28:07 crc kubenswrapper[4805]: I1216 13:28:07.342477 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_67f02ee6-d5c4-4010-983e-c4ee5e24a2c0/probe/0.log" Dec 16 13:28:07 crc kubenswrapper[4805]: I1216 13:28:07.420612 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_67f02ee6-d5c4-4010-983e-c4ee5e24a2c0/cinder-scheduler/0.log" Dec 16 13:28:07 crc kubenswrapper[4805]: I1216 13:28:07.591070 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-dwnr7_9b007dae-6dbd-429b-85a3-a2087c098b68/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:07 crc kubenswrapper[4805]: I1216 13:28:07.692497 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6ckbk_9cb116b7-43db-435a-b4b1-59447b57c611/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:07 crc kubenswrapper[4805]: I1216 13:28:07.845592 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-79dc84bdb7-mnp8m_d507896c-ad5d-4fd8-9df2-22feaa838e8f/init/0.log" Dec 16 13:28:08 crc kubenswrapper[4805]: I1216 13:28:08.065162 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-79dc84bdb7-mnp8m_d507896c-ad5d-4fd8-9df2-22feaa838e8f/init/0.log" Dec 16 13:28:08 crc kubenswrapper[4805]: I1216 13:28:08.179132 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-k6pv9_d062ee72-cb69-4cdc-93fe-1474435c0904/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:08 crc kubenswrapper[4805]: I1216 13:28:08.286219 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-79dc84bdb7-mnp8m_d507896c-ad5d-4fd8-9df2-22feaa838e8f/dnsmasq-dns/0.log" Dec 16 13:28:08 crc kubenswrapper[4805]: I1216 13:28:08.456165 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_645792d6-df6f-4e8c-a3cb-0b150ff5cd37/glance-log/0.log" Dec 16 13:28:08 crc kubenswrapper[4805]: I1216 13:28:08.486697 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_645792d6-df6f-4e8c-a3cb-0b150ff5cd37/glance-httpd/0.log" Dec 16 13:28:08 crc kubenswrapper[4805]: I1216 13:28:08.742481 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b7ccad5e-bb55-4439-964a-2830bacf95e2/glance-log/0.log" Dec 16 13:28:08 crc kubenswrapper[4805]: I1216 13:28:08.753953 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b7ccad5e-bb55-4439-964a-2830bacf95e2/glance-httpd/0.log" Dec 16 13:28:08 crc kubenswrapper[4805]: I1216 13:28:08.961698 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-69799999fb-rbm4h_d4af4b9e-77b2-4f27-8148-7000d60f2266/horizon/0.log" Dec 16 13:28:09 crc kubenswrapper[4805]: I1216 13:28:09.271503 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-xl9l2_cb170dbb-2c7a-417a-8254-849165c08ef4/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:09 crc kubenswrapper[4805]: I1216 13:28:09.441133 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-brghs_c3d927b4-1bc7-4093-ad62-19dd87d9888a/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:09 crc kubenswrapper[4805]: I1216 13:28:09.888861 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29431501-jlgq8_aad6bcdb-7a23-48fa-b79c-69932357cf9f/keystone-cron/0.log" Dec 16 13:28:09 crc kubenswrapper[4805]: I1216 13:28:09.971288 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-69799999fb-rbm4h_d4af4b9e-77b2-4f27-8148-7000d60f2266/horizon-log/0.log" Dec 16 13:28:10 crc kubenswrapper[4805]: I1216 13:28:10.184688 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e3d98762-b270-4f16-8dce-26f0662152ad/kube-state-metrics/0.log" Dec 16 13:28:10 crc kubenswrapper[4805]: I1216 13:28:10.351601 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7ff49bdd98-ng49z_234428eb-9306-44b8-baf0-e4c6c0772699/keystone-api/0.log" Dec 16 13:28:11 crc kubenswrapper[4805]: I1216 13:28:11.123931 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4ft55_8b5953ad-0a78-4483-9097-2d4de5ad084e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:11 crc kubenswrapper[4805]: I1216 13:28:11.894295 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-57fbfd7dcc-lq2v9_26f9167a-aa3e-4381-aac0-e0aaea7449a8/neutron-httpd/0.log" Dec 16 13:28:12 crc kubenswrapper[4805]: I1216 13:28:12.003332 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-57fbfd7dcc-lq2v9_26f9167a-aa3e-4381-aac0-e0aaea7449a8/neutron-api/0.log" Dec 16 13:28:12 crc kubenswrapper[4805]: I1216 13:28:12.443852 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-xpv4g_f673c96e-f755-49d0-90ed-46ac92e151c2/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:13 crc kubenswrapper[4805]: I1216 13:28:13.344087 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7be1f6a3-3b42-4981-86ff-8851e931cd97/nova-cell0-conductor-conductor/0.log" Dec 16 13:28:13 crc kubenswrapper[4805]: I1216 13:28:13.562481 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a1c8741c-c8da-42f0-9ef8-9e419b58dcf4/nova-cell1-conductor-conductor/0.log" Dec 16 13:28:13 crc kubenswrapper[4805]: I1216 13:28:13.968558 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_31340444-d4f0-468b-9acb-ca27b87165a9/nova-api-log/0.log" Dec 16 13:28:14 crc kubenswrapper[4805]: I1216 13:28:14.224105 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_67131b33-530e-49eb-9e82-cfbe1a05a5f9/nova-cell1-novncproxy-novncproxy/0.log" Dec 16 13:28:14 crc kubenswrapper[4805]: I1216 13:28:14.396767 4805 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-mwk4q_128a7ec0-e80f-4147-a459-283405d9c838/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:14 crc kubenswrapper[4805]: I1216 13:28:14.670257 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_31340444-d4f0-468b-9acb-ca27b87165a9/nova-api-api/0.log" Dec 16 13:28:14 crc kubenswrapper[4805]: I1216 13:28:14.737229 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ba40942d-c8ca-45da-b36d-7e447dac985e/nova-metadata-log/0.log" Dec 16 13:28:15 crc kubenswrapper[4805]: I1216 13:28:15.209982 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2b009857-5d9e-4d6e-979c-d7fc3357bd66/mysql-bootstrap/0.log" Dec 16 13:28:15 crc kubenswrapper[4805]: I1216 13:28:15.437578 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2b009857-5d9e-4d6e-979c-d7fc3357bd66/mysql-bootstrap/0.log" Dec 16 13:28:15 crc kubenswrapper[4805]: I1216 13:28:15.528609 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2b009857-5d9e-4d6e-979c-d7fc3357bd66/galera/0.log" Dec 16 13:28:15 crc kubenswrapper[4805]: I1216 13:28:15.712104 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_9ff906a7-6277-4d2e-b804-4d8e006cab7d/nova-scheduler-scheduler/0.log" Dec 16 13:28:15 crc kubenswrapper[4805]: I1216 13:28:15.822323 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_76cc6f3a-504f-4096-8c08-efbcb51ad101/memcached/0.log" Dec 16 13:28:15 crc kubenswrapper[4805]: I1216 13:28:15.872410 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dd66837a-9e6f-41fd-91a0-f010e02a3a80/mysql-bootstrap/0.log" Dec 16 13:28:16 crc kubenswrapper[4805]: I1216 13:28:16.176431 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dd66837a-9e6f-41fd-91a0-f010e02a3a80/mysql-bootstrap/0.log" Dec 16 13:28:16 crc kubenswrapper[4805]: I1216 13:28:16.235641 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_61335104-9325-4589-bcbb-fb19f4273dc2/openstackclient/0.log" Dec 16 13:28:16 crc kubenswrapper[4805]: I1216 13:28:16.337314 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dd66837a-9e6f-41fd-91a0-f010e02a3a80/galera/0.log" Dec 16 13:28:16 crc kubenswrapper[4805]: I1216 13:28:16.608534 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ba40942d-c8ca-45da-b36d-7e447dac985e/nova-metadata-metadata/0.log" Dec 16 13:28:16 crc kubenswrapper[4805]: I1216 13:28:16.624325 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-j76qb_2d3d6086-fcc3-4ba0-af90-445bbcb3ff06/openstack-network-exporter/0.log" Dec 16 13:28:16 crc kubenswrapper[4805]: I1216 13:28:16.634878 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9xw22_f3e4543d-c48b-45a2-8eea-2584d5bba4b6/ovn-controller/0.log" Dec 16 13:28:17 crc kubenswrapper[4805]: I1216 13:28:17.054970 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ffmtv_b7c2d7ad-f96b-4eaa-b498-46e0739154f1/ovsdb-server-init/0.log" Dec 16 13:28:17 crc kubenswrapper[4805]: I1216 13:28:17.301083 4805 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-n2stl_6cf38957-b778-49fe-9dd0-c629e23fb773/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:17 crc kubenswrapper[4805]: I1216 13:28:17.336662 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ffmtv_b7c2d7ad-f96b-4eaa-b498-46e0739154f1/ovsdb-server-init/0.log" Dec 16 13:28:17 crc kubenswrapper[4805]: I1216 13:28:17.367389 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ffmtv_b7c2d7ad-f96b-4eaa-b498-46e0739154f1/ovs-vswitchd/0.log" Dec 16 13:28:17 crc kubenswrapper[4805]: I1216 13:28:17.421288 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ffmtv_b7c2d7ad-f96b-4eaa-b498-46e0739154f1/ovsdb-server/0.log" Dec 16 13:28:17 crc kubenswrapper[4805]: I1216 13:28:17.618487 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_87517712-55f8-42c7-8a23-cb388090ed3c/openstack-network-exporter/0.log" Dec 16 13:28:17 crc kubenswrapper[4805]: I1216 13:28:17.652759 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_39d101de-9ba7-46dc-830e-3c25397a64d2/openstack-network-exporter/0.log" Dec 16 13:28:17 crc kubenswrapper[4805]: I1216 13:28:17.673528 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_87517712-55f8-42c7-8a23-cb388090ed3c/ovn-northd/0.log" Dec 16 13:28:17 crc kubenswrapper[4805]: I1216 13:28:17.861446 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_39d101de-9ba7-46dc-830e-3c25397a64d2/ovsdbserver-nb/0.log" Dec 16 13:28:17 crc kubenswrapper[4805]: I1216 13:28:17.912095 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_dfbd99cf-6302-48c2-b119-1378b47d7c6d/openstack-network-exporter/0.log" Dec 16 13:28:17 crc kubenswrapper[4805]: I1216 13:28:17.915960 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_dfbd99cf-6302-48c2-b119-1378b47d7c6d/ovsdbserver-sb/0.log" Dec 16 13:28:18 crc kubenswrapper[4805]: I1216 13:28:18.260452 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_09870268-6496-4840-bd93-b9ae456cb54a/setup-container/0.log" Dec 16 13:28:18 crc kubenswrapper[4805]: I1216 13:28:18.372835 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-775868cbc4-vvjnt_2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e/placement-api/0.log" Dec 16 13:28:18 crc kubenswrapper[4805]: I1216 13:28:18.394188 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-775868cbc4-vvjnt_2ba4f0e8-aa7e-40ae-9a2b-de40b6bdb69e/placement-log/0.log" Dec 16 13:28:18 crc kubenswrapper[4805]: I1216 13:28:18.557041 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_09870268-6496-4840-bd93-b9ae456cb54a/rabbitmq/0.log" Dec 16 13:28:18 crc kubenswrapper[4805]: I1216 13:28:18.583643 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_09870268-6496-4840-bd93-b9ae456cb54a/setup-container/0.log" Dec 16 13:28:18 crc kubenswrapper[4805]: I1216 13:28:18.668104 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_954917f7-4d5d-4dac-9621-f3c281539cf0/setup-container/0.log" Dec 16 13:28:18 crc kubenswrapper[4805]: I1216 13:28:18.946028 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_954917f7-4d5d-4dac-9621-f3c281539cf0/setup-container/0.log" Dec 16 13:28:18 crc kubenswrapper[4805]: I1216 13:28:18.955873 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-4n6v4_de881c10-2738-4b4a-9d44-8397ba3fc6b7/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:18 crc kubenswrapper[4805]: I1216 13:28:18.965633 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_954917f7-4d5d-4dac-9621-f3c281539cf0/rabbitmq/0.log" Dec 16 13:28:19 crc kubenswrapper[4805]: I1216 13:28:19.170207 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-pphv2_99e1dabe-dc85-4cd8-a53f-0ae55dfb4fec/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:19 crc kubenswrapper[4805]: I1216 13:28:19.203251 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-99pfm_40b28c68-1737-4ad9-a361-43581b880c4b/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:19 crc kubenswrapper[4805]: I1216 13:28:19.251659 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-s5wgr_3c058268-e16d-417e-8375-014b2cd1d3a5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:19 crc kubenswrapper[4805]: I1216 13:28:19.652700 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-jw42t_2af0c7ba-b7ca-40cd-9443-f9cb126211b0/ssh-known-hosts-edpm-deployment/0.log" Dec 16 13:28:19 crc kubenswrapper[4805]: I1216 13:28:19.698264 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58c65944b9-fbmdw_c59b441a-f0f3-44c5-a0b8-42f00c60da72/proxy-httpd/0.log" Dec 16 13:28:19 crc kubenswrapper[4805]: I1216 13:28:19.734840 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58c65944b9-fbmdw_c59b441a-f0f3-44c5-a0b8-42f00c60da72/proxy-server/0.log" Dec 16 13:28:20 crc kubenswrapper[4805]: I1216 13:28:20.031240 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-xsqmm_09006e2e-f33c-4d0b-a0d3-36ea5e30b9b5/swift-ring-rebalance/0.log" Dec 16 13:28:20 crc kubenswrapper[4805]: I1216 13:28:20.220369 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/account-auditor/0.log" Dec 16 13:28:20 crc kubenswrapper[4805]: I1216 13:28:20.230530 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/account-reaper/0.log" Dec 16 13:28:20 crc kubenswrapper[4805]: I1216 13:28:20.361197 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/account-replicator/0.log" Dec 16 13:28:20 crc kubenswrapper[4805]: I1216 13:28:20.396196 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/account-server/0.log" Dec 16 13:28:20 crc kubenswrapper[4805]: I1216 13:28:20.447720 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/container-auditor/0.log" Dec 16 13:28:20 crc kubenswrapper[4805]: I1216 13:28:20.544377 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/container-replicator/0.log" Dec 16 13:28:20 crc kubenswrapper[4805]: I1216 13:28:20.627934 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/container-updater/0.log" Dec 16 13:28:20 crc kubenswrapper[4805]: I1216 13:28:20.628679 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/container-server/0.log" Dec 16 13:28:20 crc kubenswrapper[4805]: I1216 13:28:20.679249 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/object-auditor/0.log" Dec 16 13:28:20 crc kubenswrapper[4805]: I1216 13:28:20.741479 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/object-expirer/0.log" Dec 16 13:28:20 crc kubenswrapper[4805]: I1216 13:28:20.827353 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/object-replicator/0.log" Dec 16 13:28:20 crc kubenswrapper[4805]: I1216 13:28:20.897169 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/object-updater/0.log" Dec 16 13:28:20 crc kubenswrapper[4805]: I1216 13:28:20.910740 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/object-server/0.log" Dec 16 13:28:20 crc kubenswrapper[4805]: I1216 13:28:20.948888 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/rsync/0.log" Dec 16 13:28:21 crc kubenswrapper[4805]: I1216 13:28:21.025594 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c7ed9b27-9804-4584-a244-30ba1f033e17/swift-recon-cron/0.log" Dec 16 13:28:21 crc kubenswrapper[4805]: I1216 13:28:21.245368 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_96a2c3a4-408a-4437-9a22-bc7c41f87222/tempest-tests-tempest-tests-runner/0.log" Dec 16 13:28:21 crc kubenswrapper[4805]: I1216 13:28:21.329071 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-r2qd6_e4b7d191-e86d-4386-935b-e3ce28794d6d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:21 crc kubenswrapper[4805]: I1216 13:28:21.432506 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_5858c1f1-d24f-4e97-85a1-b84b85c6a0ce/test-operator-logs-container/0.log" Dec 16 13:28:21 crc kubenswrapper[4805]: I1216 13:28:21.542762 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-xxt7f_87d9250a-d08e-4cfe-9619-d48ebdb2753c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 16 13:28:47 crc kubenswrapper[4805]: I1216 13:28:47.769986 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq_8da86235-63b0-46ec-bca8-b68c248b2daa/util/0.log" Dec 16 13:28:47 crc kubenswrapper[4805]: I1216 13:28:47.914213 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq_8da86235-63b0-46ec-bca8-b68c248b2daa/pull/0.log" Dec 16 13:28:47 crc kubenswrapper[4805]: I1216 13:28:47.943314 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq_8da86235-63b0-46ec-bca8-b68c248b2daa/util/0.log" Dec 16 13:28:48 crc kubenswrapper[4805]: I1216 13:28:48.000006 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq_8da86235-63b0-46ec-bca8-b68c248b2daa/pull/0.log" Dec 16 13:28:48 crc kubenswrapper[4805]: I1216 13:28:48.196323 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq_8da86235-63b0-46ec-bca8-b68c248b2daa/util/0.log" Dec 16 13:28:48 crc kubenswrapper[4805]: I1216 13:28:48.224499 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq_8da86235-63b0-46ec-bca8-b68c248b2daa/pull/0.log" Dec 16 13:28:48 crc kubenswrapper[4805]: I1216 13:28:48.245279 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_128742218d0fe9c7a4bf3cebf5394ecd7750f2f3a9c9e82be094cdaaa34lbrq_8da86235-63b0-46ec-bca8-b68c248b2daa/extract/0.log" Dec 16 13:28:48 crc kubenswrapper[4805]: I1216 13:28:48.388475 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-bb565c8dd-5gtrz_81b6ebe6-984f-4ecc-9d75-ac78097f7af2/kube-rbac-proxy/0.log" Dec 16 13:28:48 crc kubenswrapper[4805]: I1216 13:28:48.514428 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-bb565c8dd-5gtrz_81b6ebe6-984f-4ecc-9d75-ac78097f7af2/manager/0.log" Dec 16 13:28:48 crc kubenswrapper[4805]: I1216 13:28:48.546562 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-669b58f65-782cb_f31accc0-70b7-4014-ac71-679dc729ed80/kube-rbac-proxy/0.log" Dec 16 13:28:48 crc kubenswrapper[4805]: I1216 13:28:48.661378 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-669b58f65-782cb_f31accc0-70b7-4014-ac71-679dc729ed80/manager/0.log" Dec 16 13:28:48 crc kubenswrapper[4805]: I1216 13:28:48.780329 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-69977bdf55-t27s6_4a51f724-a3be-4ef5-acd6-84891873147b/kube-rbac-proxy/0.log" Dec 16 13:28:48 crc kubenswrapper[4805]: I1216 13:28:48.845110 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-69977bdf55-t27s6_4a51f724-a3be-4ef5-acd6-84891873147b/manager/0.log" Dec 16 13:28:48 crc kubenswrapper[4805]: I1216 13:28:48.925448 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5847f67c56-pp6mh_b738fc79-2b52-4759-a3ee-72e0946df392/kube-rbac-proxy/0.log" Dec 16 13:28:49 crc kubenswrapper[4805]: I1216 13:28:49.056839 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5847f67c56-pp6mh_b738fc79-2b52-4759-a3ee-72e0946df392/manager/0.log" Dec 16 13:28:49 crc kubenswrapper[4805]: I1216 
13:28:49.127672 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-7b45cd6d68-g5msx_37dfd47c-0789-4054-a4c7-37cff4d15b15/manager/0.log" Dec 16 13:28:49 crc kubenswrapper[4805]: I1216 13:28:49.196707 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-7b45cd6d68-g5msx_37dfd47c-0789-4054-a4c7-37cff4d15b15/kube-rbac-proxy/0.log" Dec 16 13:28:49 crc kubenswrapper[4805]: I1216 13:28:49.306196 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6985cf78fb-rfvkc_81719820-96fa-418d-9d0b-18ba90027850/kube-rbac-proxy/0.log" Dec 16 13:28:49 crc kubenswrapper[4805]: I1216 13:28:49.371925 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6985cf78fb-rfvkc_81719820-96fa-418d-9d0b-18ba90027850/manager/0.log" Dec 16 13:28:49 crc kubenswrapper[4805]: I1216 13:28:49.546968 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-85d55b5858-4gk2l_e817150a-4845-4d56-8dd0-229394b946db/kube-rbac-proxy/0.log" Dec 16 13:28:49 crc kubenswrapper[4805]: I1216 13:28:49.636238 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-85d55b5858-4gk2l_e817150a-4845-4d56-8dd0-229394b946db/manager/0.log" Dec 16 13:28:49 crc kubenswrapper[4805]: I1216 13:28:49.740834 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54fd9dc4b5-tszdx_0718fce0-14e8-434b-be98-ef48ec6059f3/kube-rbac-proxy/0.log" Dec 16 13:28:49 crc kubenswrapper[4805]: I1216 13:28:49.820442 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54fd9dc4b5-tszdx_0718fce0-14e8-434b-be98-ef48ec6059f3/manager/0.log" Dec 16 13:28:49 crc kubenswrapper[4805]: I1216 13:28:49.915910 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7f764db9b-hq9vm_ca0e9c4d-e3dc-4c3f-8451-8d0258ce9e96/kube-rbac-proxy/0.log" Dec 16 13:28:50 crc kubenswrapper[4805]: I1216 13:28:50.112999 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7f764db9b-hq9vm_ca0e9c4d-e3dc-4c3f-8451-8d0258ce9e96/manager/0.log" Dec 16 13:28:50 crc kubenswrapper[4805]: I1216 13:28:50.149934 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7cc599445b-b76nx_0e8bdc0b-046a-4513-9ed6-3350f94faea5/kube-rbac-proxy/0.log" Dec 16 13:28:50 crc kubenswrapper[4805]: I1216 13:28:50.223572 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7cc599445b-b76nx_0e8bdc0b-046a-4513-9ed6-3350f94faea5/manager/0.log" Dec 16 13:28:50 crc kubenswrapper[4805]: I1216 13:28:50.342858 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-xhhsn_93f2c029-57dc-47fc-9c2e-18f2710ff53e/kube-rbac-proxy/0.log" Dec 16 13:28:50 crc kubenswrapper[4805]: I1216 13:28:50.446869 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-xhhsn_93f2c029-57dc-47fc-9c2e-18f2710ff53e/manager/0.log" Dec 16 13:28:50 crc 
kubenswrapper[4805]: I1216 13:28:50.557258 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-58879495c-pqrdf_57bf5f89-e14d-442f-8064-2c0ca66139c4/kube-rbac-proxy/0.log" Dec 16 13:28:50 crc kubenswrapper[4805]: I1216 13:28:50.644374 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-58879495c-pqrdf_57bf5f89-e14d-442f-8064-2c0ca66139c4/manager/0.log" Dec 16 13:28:50 crc kubenswrapper[4805]: I1216 13:28:50.681497 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b444986fd-5mnhx_b885ab69-dc83-439c-9040-09fc3d238093/kube-rbac-proxy/0.log" Dec 16 13:28:50 crc kubenswrapper[4805]: I1216 13:28:50.901114 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b444986fd-5mnhx_b885ab69-dc83-439c-9040-09fc3d238093/manager/0.log" Dec 16 13:28:50 crc kubenswrapper[4805]: I1216 13:28:50.939439 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-d5fb87cb8-p6wbs_df737798-a34c-4142-88c2-592096b02f85/kube-rbac-proxy/0.log" Dec 16 13:28:50 crc kubenswrapper[4805]: I1216 13:28:50.995507 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-d5fb87cb8-p6wbs_df737798-a34c-4142-88c2-592096b02f85/manager/0.log" Dec 16 13:28:51 crc kubenswrapper[4805]: I1216 13:28:51.151071 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk_3b82dc59-a470-4665-8271-3bbcfecb73f1/kube-rbac-proxy/0.log" Dec 16 13:28:51 crc kubenswrapper[4805]: I1216 13:28:51.178684 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cbb4f75bd2gpnk_3b82dc59-a470-4665-8271-3bbcfecb73f1/manager/0.log" Dec 16 13:28:51 crc kubenswrapper[4805]: I1216 13:28:51.385571 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-54798f4d5-64lpb_cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b/kube-rbac-proxy/0.log" Dec 16 13:28:51 crc kubenswrapper[4805]: I1216 13:28:51.524165 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-777d8df86-zk62z_990d476a-8cea-4e1a-8e5f-10fa313d23cb/kube-rbac-proxy/0.log" Dec 16 13:28:51 crc kubenswrapper[4805]: I1216 13:28:51.911809 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-777d8df86-zk62z_990d476a-8cea-4e1a-8e5f-10fa313d23cb/operator/0.log" Dec 16 13:28:51 crc kubenswrapper[4805]: I1216 13:28:51.965756 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4bhsq_42823d10-65ec-407c-93a4-98d27954a5f3/registry-server/0.log" Dec 16 13:28:52 crc kubenswrapper[4805]: I1216 13:28:52.296537 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-l26rt_5581487f-dd20-4fb5-99b7-c6cfb197e548/kube-rbac-proxy/0.log" Dec 16 13:28:52 crc kubenswrapper[4805]: I1216 13:28:52.473746 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-l26rt_5581487f-dd20-4fb5-99b7-c6cfb197e548/manager/0.log" Dec 16 13:28:52 crc kubenswrapper[4805]: I1216 13:28:52.553924 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-54798f4d5-64lpb_cc49fa1c-3dc7-4551-ae01-1b1d44d1c53b/manager/0.log" Dec 16 13:28:52 crc kubenswrapper[4805]: I1216 13:28:52.658545 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-cc776f956-smg8x_9b3aad50-49b1-43c0-84c9-15368e69abae/manager/0.log" Dec 16 13:28:52 crc kubenswrapper[4805]: I1216 13:28:52.780171 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-cc776f956-smg8x_9b3aad50-49b1-43c0-84c9-15368e69abae/kube-rbac-proxy/0.log" Dec 16 13:28:52 crc kubenswrapper[4805]: I1216 13:28:52.900849 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-9ppfn_418e4014-2b81-4b93-a665-ca28d1e1d7ee/operator/0.log" Dec 16 13:28:52 crc kubenswrapper[4805]: I1216 13:28:52.956366 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7c9ff8845d-swkll_bbd2ad8a-7239-4e25-bfbd-a009e826a337/kube-rbac-proxy/0.log" Dec 16 13:28:53 crc kubenswrapper[4805]: I1216 13:28:53.003370 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7c9ff8845d-swkll_bbd2ad8a-7239-4e25-bfbd-a009e826a337/manager/0.log" Dec 16 13:28:53 crc kubenswrapper[4805]: I1216 13:28:53.137283 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6bc5b9c47-tdd6s_f5b07707-f2f4-4664-9522-268f8ee833db/kube-rbac-proxy/0.log" Dec 16 13:28:53 crc kubenswrapper[4805]: I1216 13:28:53.165480 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6bc5b9c47-tdd6s_f5b07707-f2f4-4664-9522-268f8ee833db/manager/0.log" Dec 16 13:28:53 crc kubenswrapper[4805]: I1216 13:28:53.257196 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5d79c6465c-nldwq_68475cd9-8ddd-44c5-ae7e-446bc92bb188/kube-rbac-proxy/0.log" Dec 16 13:28:53 crc kubenswrapper[4805]: I1216 13:28:53.313344 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5d79c6465c-nldwq_68475cd9-8ddd-44c5-ae7e-446bc92bb188/manager/0.log" Dec 16 13:28:53 crc kubenswrapper[4805]: I1216 13:28:53.392031 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-658bc5c8c5-wlr8s_39e02343-d3e2-4e57-b38e-1b275f3cb29d/kube-rbac-proxy/0.log" Dec 16 13:28:53 crc kubenswrapper[4805]: I1216 13:28:53.454090 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-658bc5c8c5-wlr8s_39e02343-d3e2-4e57-b38e-1b275f3cb29d/manager/0.log" Dec 16 13:29:11 crc kubenswrapper[4805]: I1216 13:29:11.688255 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-9jccm_df2558d5-6ce0-4fb0-b689-fc8682a89744/control-plane-machine-set-operator/0.log" Dec 16 13:29:11 crc kubenswrapper[4805]: I1216 13:29:11.878857 4805 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q7lmj_9a656193-7884-4e3d-8a17-4ff680c4a116/machine-api-operator/0.log" Dec 16 13:29:11 crc kubenswrapper[4805]: I1216 13:29:11.907763 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q7lmj_9a656193-7884-4e3d-8a17-4ff680c4a116/kube-rbac-proxy/0.log" Dec 16 13:29:24 crc kubenswrapper[4805]: I1216 13:29:24.979810 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-6wgm7_31df596c-e28f-4424-8e69-09cadc77cd6d/cert-manager-controller/0.log" Dec 16 13:29:25 crc kubenswrapper[4805]: I1216 13:29:25.125685 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-mbkms_5a731837-c343-49f3-8bd9-26b04af9b2d0/cert-manager-cainjector/0.log" Dec 16 13:29:25 crc kubenswrapper[4805]: I1216 13:29:25.208220 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-9m9jg_dd1457ce-6da4-4b68-9bb5-7c57738c0ace/cert-manager-webhook/0.log" Dec 16 13:29:41 crc kubenswrapper[4805]: I1216 13:29:41.035063 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-8v4hk_8fbc1c7a-286e-4428-a745-32211779781e/nmstate-console-plugin/0.log" Dec 16 13:29:41 crc kubenswrapper[4805]: I1216 13:29:41.422126 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lxsrn_ff660e50-710f-494e-aa58-66abf3868df5/nmstate-handler/0.log" Dec 16 13:29:41 crc kubenswrapper[4805]: I1216 13:29:41.474998 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-z2kdh_31cb3421-4893-45b4-bb8d-8afd77fe9cb2/kube-rbac-proxy/0.log" Dec 16 13:29:41 crc kubenswrapper[4805]: I1216 13:29:41.546730 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-z2kdh_31cb3421-4893-45b4-bb8d-8afd77fe9cb2/nmstate-metrics/0.log" Dec 16 13:29:41 crc kubenswrapper[4805]: I1216 13:29:41.698976 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-dvv9t_4bc64030-6cd9-48cb-8665-3424d3f6897c/nmstate-operator/0.log" Dec 16 13:29:41 crc kubenswrapper[4805]: I1216 13:29:41.808341 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-m8h9v_212a3b56-a221-4818-a463-90cc9e4e46e5/nmstate-webhook/0.log" Dec 16 13:29:57 crc kubenswrapper[4805]: I1216 13:29:57.072100 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 13:29:57 crc kubenswrapper[4805]: I1216 13:29:57.072710 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 13:29:59 crc kubenswrapper[4805]: I1216 13:29:59.740892 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-5bddd4b946-mw2xw_601c7e3b-b663-4780-bb6a-59bc7e4d510d/kube-rbac-proxy/0.log" Dec 16 13:29:59 crc kubenswrapper[4805]: I1216 13:29:59.751820 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-mw2xw_601c7e3b-b663-4780-bb6a-59bc7e4d510d/controller/0.log" Dec 16 13:29:59 crc kubenswrapper[4805]: I1216 13:29:59.982711 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-frr-files/0.log" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.156741 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62"] Dec 16 13:30:00 crc kubenswrapper[4805]: E1216 13:30:00.157309 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be06ea4a-900e-42b5-b288-8769d436b5ab" containerName="container-00" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.157334 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="be06ea4a-900e-42b5-b288-8769d436b5ab" containerName="container-00" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.157659 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="be06ea4a-900e-42b5-b288-8769d436b5ab" containerName="container-00" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.158776 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.161569 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.161938 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.182014 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62"] Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.200114 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/855460b5-c536-43d1-8142-1290647ea084-secret-volume\") pod \"collect-profiles-29431530-bdt62\" (UID: \"855460b5-c536-43d1-8142-1290647ea084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.200238 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855460b5-c536-43d1-8142-1290647ea084-config-volume\") pod \"collect-profiles-29431530-bdt62\" (UID: \"855460b5-c536-43d1-8142-1290647ea084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.200289 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b682w\" (UniqueName: \"kubernetes.io/projected/855460b5-c536-43d1-8142-1290647ea084-kube-api-access-b682w\") pod \"collect-profiles-29431530-bdt62\" (UID: \"855460b5-c536-43d1-8142-1290647ea084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.289310 4805 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-reloader/0.log" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.302086 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/855460b5-c536-43d1-8142-1290647ea084-secret-volume\") pod \"collect-profiles-29431530-bdt62\" (UID: \"855460b5-c536-43d1-8142-1290647ea084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.302182 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855460b5-c536-43d1-8142-1290647ea084-config-volume\") pod \"collect-profiles-29431530-bdt62\" (UID: \"855460b5-c536-43d1-8142-1290647ea084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.302221 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b682w\" (UniqueName: \"kubernetes.io/projected/855460b5-c536-43d1-8142-1290647ea084-kube-api-access-b682w\") pod \"collect-profiles-29431530-bdt62\" (UID: \"855460b5-c536-43d1-8142-1290647ea084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.304453 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855460b5-c536-43d1-8142-1290647ea084-config-volume\") pod \"collect-profiles-29431530-bdt62\" (UID: \"855460b5-c536-43d1-8142-1290647ea084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.326794 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/855460b5-c536-43d1-8142-1290647ea084-secret-volume\") pod \"collect-profiles-29431530-bdt62\" (UID: \"855460b5-c536-43d1-8142-1290647ea084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.330241 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b682w\" (UniqueName: \"kubernetes.io/projected/855460b5-c536-43d1-8142-1290647ea084-kube-api-access-b682w\") pod \"collect-profiles-29431530-bdt62\" (UID: \"855460b5-c536-43d1-8142-1290647ea084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.404827 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-reloader/0.log" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.413415 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-frr-files/0.log" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.461520 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-metrics/0.log" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.487511 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.745806 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-metrics/0.log" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.815409 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-reloader/0.log" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.850111 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-frr-files/0.log" Dec 16 13:30:00 crc kubenswrapper[4805]: I1216 13:30:00.918282 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-metrics/0.log" Dec 16 13:30:01 crc kubenswrapper[4805]: I1216 13:30:01.136749 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62"] Dec 16 13:30:01 crc kubenswrapper[4805]: I1216 13:30:01.178968 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-reloader/0.log" Dec 16 13:30:01 crc kubenswrapper[4805]: I1216 13:30:01.229206 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-frr-files/0.log" Dec 16 13:30:01 crc kubenswrapper[4805]: I1216 13:30:01.268040 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/cp-metrics/0.log" Dec 16 13:30:01 crc kubenswrapper[4805]: I1216 13:30:01.271412 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/controller/0.log" Dec 16 13:30:01 crc kubenswrapper[4805]: I1216 13:30:01.566376 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/kube-rbac-proxy-frr/0.log" Dec 16 13:30:01 crc kubenswrapper[4805]: I1216 13:30:01.697046 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/kube-rbac-proxy/0.log" Dec 16 13:30:01 crc kubenswrapper[4805]: I1216 13:30:01.742246 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" event={"ID":"855460b5-c536-43d1-8142-1290647ea084","Type":"ContainerStarted","Data":"5ca06f94f155cb625b400a0231048be3f2fcd4bf79ec8d4ee64ce0bb878f84dd"} Dec 16 13:30:01 crc kubenswrapper[4805]: I1216 13:30:01.742298 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" event={"ID":"855460b5-c536-43d1-8142-1290647ea084","Type":"ContainerStarted","Data":"cfbc2c5d71e3340c28d06e619fe11ac4a99f5e54452157a116641fd5efb0360e"} Dec 16 13:30:01 crc kubenswrapper[4805]: I1216 13:30:01.769713 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" podStartSLOduration=1.769646198 podStartE2EDuration="1.769646198s" podCreationTimestamp="2025-12-16 13:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-16 13:30:01.76727536 +0000 UTC m=+5675.485533165" watchObservedRunningTime="2025-12-16 13:30:01.769646198 +0000 UTC m=+5675.487904023" Dec 16 13:30:01 crc kubenswrapper[4805]: I1216 13:30:01.827117 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/frr-metrics/0.log" Dec 16 13:30:02 crc kubenswrapper[4805]: I1216 13:30:02.050818 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/reloader/0.log" Dec 16 13:30:02 crc kubenswrapper[4805]: E1216 13:30:02.148737 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod855460b5_c536_43d1_8142_1290647ea084.slice/crio-conmon-5ca06f94f155cb625b400a0231048be3f2fcd4bf79ec8d4ee64ce0bb878f84dd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod855460b5_c536_43d1_8142_1290647ea084.slice/crio-5ca06f94f155cb625b400a0231048be3f2fcd4bf79ec8d4ee64ce0bb878f84dd.scope\": RecentStats: unable to find data in memory cache]" Dec 16 13:30:02 crc kubenswrapper[4805]: I1216 13:30:02.284773 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-zqndj_a7792c7c-66d6-4c58-ba5f-09ddbb883c20/frr-k8s-webhook-server/0.log" Dec 16 13:30:02 crc kubenswrapper[4805]: I1216 13:30:02.510022 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-596ddb8dd6-lh97f_e7312814-4835-454a-b7c4-5036ce21ef36/manager/0.log" Dec 16 13:30:02 crc kubenswrapper[4805]: I1216 13:30:02.753545 4805 generic.go:334] "Generic (PLEG): container finished" podID="855460b5-c536-43d1-8142-1290647ea084" containerID="5ca06f94f155cb625b400a0231048be3f2fcd4bf79ec8d4ee64ce0bb878f84dd" exitCode=0 Dec 16 13:30:02 crc kubenswrapper[4805]: I1216 13:30:02.753605 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" event={"ID":"855460b5-c536-43d1-8142-1290647ea084","Type":"ContainerDied","Data":"5ca06f94f155cb625b400a0231048be3f2fcd4bf79ec8d4ee64ce0bb878f84dd"} Dec 16 13:30:02 crc kubenswrapper[4805]: I1216 13:30:02.856252 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-74df5c45cc-dv7cj_1f6983ab-d0c5-4431-878f-86c4d91d6720/webhook-server/0.log" Dec 16 13:30:03 crc kubenswrapper[4805]: I1216 13:30:03.116920 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cq86h_c0c0377d-ee53-45d7-87be-8f5ba37280b3/frr/0.log" Dec 16 13:30:03 crc kubenswrapper[4805]: I1216 13:30:03.129269 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zjknk_89665fca-66d5-4ff5-98d9-e49065febb40/kube-rbac-proxy/0.log" Dec 16 13:30:03 crc kubenswrapper[4805]: I1216 13:30:03.473744 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zjknk_89665fca-66d5-4ff5-98d9-e49065febb40/speaker/0.log" Dec 16 13:30:03 crc kubenswrapper[4805]: I1216 13:30:03.720984 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jwl8g"] Dec 16 13:30:03 crc kubenswrapper[4805]: I1216 13:30:03.722986 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:03 crc kubenswrapper[4805]: I1216 13:30:03.744738 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jwl8g"] Dec 16 13:30:03 crc kubenswrapper[4805]: I1216 13:30:03.813380 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204861c1-aa2e-464d-bae5-e21a51f7ee1c-utilities\") pod \"community-operators-jwl8g\" (UID: \"204861c1-aa2e-464d-bae5-e21a51f7ee1c\") " pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:03 crc kubenswrapper[4805]: I1216 13:30:03.813524 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204861c1-aa2e-464d-bae5-e21a51f7ee1c-catalog-content\") pod \"community-operators-jwl8g\" (UID: \"204861c1-aa2e-464d-bae5-e21a51f7ee1c\") " pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:03 crc kubenswrapper[4805]: I1216 13:30:03.813576 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nzxp\" (UniqueName: \"kubernetes.io/projected/204861c1-aa2e-464d-bae5-e21a51f7ee1c-kube-api-access-6nzxp\") pod \"community-operators-jwl8g\" (UID: \"204861c1-aa2e-464d-bae5-e21a51f7ee1c\") " pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:03 crc kubenswrapper[4805]: I1216 13:30:03.915627 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204861c1-aa2e-464d-bae5-e21a51f7ee1c-utilities\") pod \"community-operators-jwl8g\" (UID: \"204861c1-aa2e-464d-bae5-e21a51f7ee1c\") " pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:03 crc kubenswrapper[4805]: I1216 13:30:03.916030 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204861c1-aa2e-464d-bae5-e21a51f7ee1c-catalog-content\") pod \"community-operators-jwl8g\" (UID: \"204861c1-aa2e-464d-bae5-e21a51f7ee1c\") " pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:03 crc kubenswrapper[4805]: I1216 13:30:03.916087 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nzxp\" (UniqueName: \"kubernetes.io/projected/204861c1-aa2e-464d-bae5-e21a51f7ee1c-kube-api-access-6nzxp\") pod \"community-operators-jwl8g\" (UID: \"204861c1-aa2e-464d-bae5-e21a51f7ee1c\") " pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:03 crc kubenswrapper[4805]: I1216 13:30:03.916382 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204861c1-aa2e-464d-bae5-e21a51f7ee1c-utilities\") pod \"community-operators-jwl8g\" (UID: \"204861c1-aa2e-464d-bae5-e21a51f7ee1c\") " pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:03 crc kubenswrapper[4805]: I1216 13:30:03.916719 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204861c1-aa2e-464d-bae5-e21a51f7ee1c-catalog-content\") pod \"community-operators-jwl8g\" (UID: \"204861c1-aa2e-464d-bae5-e21a51f7ee1c\") " pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:03 crc kubenswrapper[4805]: I1216 13:30:03.965125 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6nzxp\" (UniqueName: \"kubernetes.io/projected/204861c1-aa2e-464d-bae5-e21a51f7ee1c-kube-api-access-6nzxp\") pod \"community-operators-jwl8g\" (UID: \"204861c1-aa2e-464d-bae5-e21a51f7ee1c\") " pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.044599 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.276262 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.322763 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vjvmf"] Dec 16 13:30:04 crc kubenswrapper[4805]: E1216 13:30:04.330209 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855460b5-c536-43d1-8142-1290647ea084" containerName="collect-profiles" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.330234 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="855460b5-c536-43d1-8142-1290647ea084" containerName="collect-profiles" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.330547 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="855460b5-c536-43d1-8142-1290647ea084" containerName="collect-profiles" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.332033 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.334926 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855460b5-c536-43d1-8142-1290647ea084-config-volume\") pod \"855460b5-c536-43d1-8142-1290647ea084\" (UID: \"855460b5-c536-43d1-8142-1290647ea084\") " Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.335029 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b682w\" (UniqueName: \"kubernetes.io/projected/855460b5-c536-43d1-8142-1290647ea084-kube-api-access-b682w\") pod \"855460b5-c536-43d1-8142-1290647ea084\" (UID: \"855460b5-c536-43d1-8142-1290647ea084\") " Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.335056 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/855460b5-c536-43d1-8142-1290647ea084-secret-volume\") pod \"855460b5-c536-43d1-8142-1290647ea084\" (UID: \"855460b5-c536-43d1-8142-1290647ea084\") " Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.371790 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/855460b5-c536-43d1-8142-1290647ea084-kube-api-access-b682w" (OuterVolumeSpecName: "kube-api-access-b682w") pod "855460b5-c536-43d1-8142-1290647ea084" (UID: "855460b5-c536-43d1-8142-1290647ea084"). InnerVolumeSpecName "kube-api-access-b682w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.372094 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/855460b5-c536-43d1-8142-1290647ea084-config-volume" (OuterVolumeSpecName: "config-volume") pod "855460b5-c536-43d1-8142-1290647ea084" (UID: "855460b5-c536-43d1-8142-1290647ea084"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.375904 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjvmf"] Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.380826 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855460b5-c536-43d1-8142-1290647ea084-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "855460b5-c536-43d1-8142-1290647ea084" (UID: "855460b5-c536-43d1-8142-1290647ea084"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.440223 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49c2j\" (UniqueName: \"kubernetes.io/projected/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-kube-api-access-49c2j\") pod \"redhat-marketplace-vjvmf\" (UID: \"726d2d40-a173-4cfa-8a58-160f3ef1bf5c\") " pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.440615 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-utilities\") pod \"redhat-marketplace-vjvmf\" (UID: \"726d2d40-a173-4cfa-8a58-160f3ef1bf5c\") " pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.440752 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-catalog-content\") pod \"redhat-marketplace-vjvmf\" (UID: \"726d2d40-a173-4cfa-8a58-160f3ef1bf5c\") " pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.440892 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855460b5-c536-43d1-8142-1290647ea084-config-volume\") on node \"crc\" DevicePath \"\"" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.440957 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b682w\" (UniqueName: \"kubernetes.io/projected/855460b5-c536-43d1-8142-1290647ea084-kube-api-access-b682w\") on node \"crc\" DevicePath \"\"" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.441020 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/855460b5-c536-43d1-8142-1290647ea084-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.547327 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-utilities\") pod \"redhat-marketplace-vjvmf\" (UID: \"726d2d40-a173-4cfa-8a58-160f3ef1bf5c\") " pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:04 
crc kubenswrapper[4805]: I1216 13:30:04.547384 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-catalog-content\") pod \"redhat-marketplace-vjvmf\" (UID: \"726d2d40-a173-4cfa-8a58-160f3ef1bf5c\") " pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.547432 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49c2j\" (UniqueName: \"kubernetes.io/projected/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-kube-api-access-49c2j\") pod \"redhat-marketplace-vjvmf\" (UID: \"726d2d40-a173-4cfa-8a58-160f3ef1bf5c\") " pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.548087 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-utilities\") pod \"redhat-marketplace-vjvmf\" (UID: \"726d2d40-a173-4cfa-8a58-160f3ef1bf5c\") " pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.548309 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-catalog-content\") pod \"redhat-marketplace-vjvmf\" (UID: \"726d2d40-a173-4cfa-8a58-160f3ef1bf5c\") " pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.582823 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49c2j\" (UniqueName: \"kubernetes.io/projected/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-kube-api-access-49c2j\") pod \"redhat-marketplace-vjvmf\" (UID: \"726d2d40-a173-4cfa-8a58-160f3ef1bf5c\") " pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:04 crc kubenswrapper[4805]: W1216 13:30:04.772452 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod204861c1_aa2e_464d_bae5_e21a51f7ee1c.slice/crio-8a4c584e6a8a63ccda3460ad012f8b7ac3adb671f557e638a003332e4cd639be WatchSource:0}: Error finding container 8a4c584e6a8a63ccda3460ad012f8b7ac3adb671f557e638a003332e4cd639be: Status 404 returned error can't find the container with id 8a4c584e6a8a63ccda3460ad012f8b7ac3adb671f557e638a003332e4cd639be Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.777248 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jwl8g"] Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.792251 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwl8g" event={"ID":"204861c1-aa2e-464d-bae5-e21a51f7ee1c","Type":"ContainerStarted","Data":"8a4c584e6a8a63ccda3460ad012f8b7ac3adb671f557e638a003332e4cd639be"} Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.794759 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" event={"ID":"855460b5-c536-43d1-8142-1290647ea084","Type":"ContainerDied","Data":"cfbc2c5d71e3340c28d06e619fe11ac4a99f5e54452157a116641fd5efb0360e"} Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.794793 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfbc2c5d71e3340c28d06e619fe11ac4a99f5e54452157a116641fd5efb0360e" Dec 16 
13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.794845 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29431530-bdt62" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.803623 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.893452 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd"] Dec 16 13:30:04 crc kubenswrapper[4805]: I1216 13:30:04.910406 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29431485-lqgtd"] Dec 16 13:30:05 crc kubenswrapper[4805]: I1216 13:30:05.392320 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjvmf"] Dec 16 13:30:05 crc kubenswrapper[4805]: I1216 13:30:05.807969 4805 generic.go:334] "Generic (PLEG): container finished" podID="726d2d40-a173-4cfa-8a58-160f3ef1bf5c" containerID="6b5171f16a6f10888be28199b9c1f318cefc7c903d5fe059240276e67ab173e1" exitCode=0 Dec 16 13:30:05 crc kubenswrapper[4805]: I1216 13:30:05.808112 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjvmf" event={"ID":"726d2d40-a173-4cfa-8a58-160f3ef1bf5c","Type":"ContainerDied","Data":"6b5171f16a6f10888be28199b9c1f318cefc7c903d5fe059240276e67ab173e1"} Dec 16 13:30:05 crc kubenswrapper[4805]: I1216 13:30:05.809504 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjvmf" event={"ID":"726d2d40-a173-4cfa-8a58-160f3ef1bf5c","Type":"ContainerStarted","Data":"710b5dce56fcd1bc84a3bbbd0f567068eed969b6d185d2e7da3a984e3b30687b"} Dec 16 13:30:05 crc kubenswrapper[4805]: I1216 13:30:05.815345 4805 generic.go:334] "Generic (PLEG): container finished" podID="204861c1-aa2e-464d-bae5-e21a51f7ee1c" containerID="b82b55e713483395d2eb8ab75ec9a6a229e7749cab6303a365faebe8c3d5d38a" exitCode=0 Dec 16 13:30:05 crc kubenswrapper[4805]: I1216 13:30:05.815394 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwl8g" event={"ID":"204861c1-aa2e-464d-bae5-e21a51f7ee1c","Type":"ContainerDied","Data":"b82b55e713483395d2eb8ab75ec9a6a229e7749cab6303a365faebe8c3d5d38a"} Dec 16 13:30:06 crc kubenswrapper[4805]: I1216 13:30:06.538590 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37553067-4bc6-46a6-bd7d-072105c8f46b" path="/var/lib/kubelet/pods/37553067-4bc6-46a6-bd7d-072105c8f46b/volumes" Dec 16 13:30:07 crc kubenswrapper[4805]: I1216 13:30:07.834162 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjvmf" event={"ID":"726d2d40-a173-4cfa-8a58-160f3ef1bf5c","Type":"ContainerStarted","Data":"3e1f48a942a230796fe2dac7f332e3767fce167e5bdf800ad30c61f908644d9a"} Dec 16 13:30:07 crc kubenswrapper[4805]: I1216 13:30:07.838970 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwl8g" event={"ID":"204861c1-aa2e-464d-bae5-e21a51f7ee1c","Type":"ContainerStarted","Data":"f4797853db31df7c80ac68f44fc6fc170586b8d6cd6b1074ef87a1585ab4e9c2"} Dec 16 13:30:08 crc kubenswrapper[4805]: I1216 13:30:08.849333 4805 generic.go:334] "Generic (PLEG): container finished" podID="204861c1-aa2e-464d-bae5-e21a51f7ee1c" 
containerID="f4797853db31df7c80ac68f44fc6fc170586b8d6cd6b1074ef87a1585ab4e9c2" exitCode=0 Dec 16 13:30:08 crc kubenswrapper[4805]: I1216 13:30:08.849437 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwl8g" event={"ID":"204861c1-aa2e-464d-bae5-e21a51f7ee1c","Type":"ContainerDied","Data":"f4797853db31df7c80ac68f44fc6fc170586b8d6cd6b1074ef87a1585ab4e9c2"} Dec 16 13:30:08 crc kubenswrapper[4805]: I1216 13:30:08.852888 4805 generic.go:334] "Generic (PLEG): container finished" podID="726d2d40-a173-4cfa-8a58-160f3ef1bf5c" containerID="3e1f48a942a230796fe2dac7f332e3767fce167e5bdf800ad30c61f908644d9a" exitCode=0 Dec 16 13:30:08 crc kubenswrapper[4805]: I1216 13:30:08.852927 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjvmf" event={"ID":"726d2d40-a173-4cfa-8a58-160f3ef1bf5c","Type":"ContainerDied","Data":"3e1f48a942a230796fe2dac7f332e3767fce167e5bdf800ad30c61f908644d9a"} Dec 16 13:30:09 crc kubenswrapper[4805]: I1216 13:30:09.864830 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwl8g" event={"ID":"204861c1-aa2e-464d-bae5-e21a51f7ee1c","Type":"ContainerStarted","Data":"7e2aa367d197673eec7d5d3f3fdeb637d0f7a86ce2cffdb1e8fdc21891fd3133"} Dec 16 13:30:09 crc kubenswrapper[4805]: I1216 13:30:09.890037 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jwl8g" podStartSLOduration=3.418505578 podStartE2EDuration="6.890008884s" podCreationTimestamp="2025-12-16 13:30:03 +0000 UTC" firstStartedPulling="2025-12-16 13:30:05.817774096 +0000 UTC m=+5679.536031901" lastFinishedPulling="2025-12-16 13:30:09.289277402 +0000 UTC m=+5683.007535207" observedRunningTime="2025-12-16 13:30:09.883354134 +0000 UTC m=+5683.601611959" watchObservedRunningTime="2025-12-16 13:30:09.890008884 +0000 UTC m=+5683.608266699" Dec 16 13:30:10 crc kubenswrapper[4805]: I1216 13:30:10.875599 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjvmf" event={"ID":"726d2d40-a173-4cfa-8a58-160f3ef1bf5c","Type":"ContainerStarted","Data":"0eda3f4406fb0f37ad4425b7ede6162262448a5e23c105c3cb3b8bb9acb17e05"} Dec 16 13:30:10 crc kubenswrapper[4805]: I1216 13:30:10.899254 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vjvmf" podStartSLOduration=2.77155569 podStartE2EDuration="6.899223238s" podCreationTimestamp="2025-12-16 13:30:04 +0000 UTC" firstStartedPulling="2025-12-16 13:30:05.811443544 +0000 UTC m=+5679.529701349" lastFinishedPulling="2025-12-16 13:30:09.939111082 +0000 UTC m=+5683.657368897" observedRunningTime="2025-12-16 13:30:10.895386078 +0000 UTC m=+5684.613643883" watchObservedRunningTime="2025-12-16 13:30:10.899223238 +0000 UTC m=+5684.617481063" Dec 16 13:30:14 crc kubenswrapper[4805]: I1216 13:30:14.046888 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:14 crc kubenswrapper[4805]: I1216 13:30:14.047462 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:14 crc kubenswrapper[4805]: I1216 13:30:14.104401 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:14 crc kubenswrapper[4805]: I1216 13:30:14.804984 4805 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:14 crc kubenswrapper[4805]: I1216 13:30:14.805105 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:14 crc kubenswrapper[4805]: I1216 13:30:14.851775 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:14 crc kubenswrapper[4805]: I1216 13:30:14.979013 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:15 crc kubenswrapper[4805]: I1216 13:30:15.696060 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jwl8g"] Dec 16 13:30:16 crc kubenswrapper[4805]: I1216 13:30:16.936281 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jwl8g" podUID="204861c1-aa2e-464d-bae5-e21a51f7ee1c" containerName="registry-server" containerID="cri-o://7e2aa367d197673eec7d5d3f3fdeb637d0f7a86ce2cffdb1e8fdc21891fd3133" gracePeriod=2 Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.476893 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.628174 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204861c1-aa2e-464d-bae5-e21a51f7ee1c-catalog-content\") pod \"204861c1-aa2e-464d-bae5-e21a51f7ee1c\" (UID: \"204861c1-aa2e-464d-bae5-e21a51f7ee1c\") " Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.628538 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nzxp\" (UniqueName: \"kubernetes.io/projected/204861c1-aa2e-464d-bae5-e21a51f7ee1c-kube-api-access-6nzxp\") pod \"204861c1-aa2e-464d-bae5-e21a51f7ee1c\" (UID: \"204861c1-aa2e-464d-bae5-e21a51f7ee1c\") " Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.628583 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204861c1-aa2e-464d-bae5-e21a51f7ee1c-utilities\") pod \"204861c1-aa2e-464d-bae5-e21a51f7ee1c\" (UID: \"204861c1-aa2e-464d-bae5-e21a51f7ee1c\") " Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.629615 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/204861c1-aa2e-464d-bae5-e21a51f7ee1c-utilities" (OuterVolumeSpecName: "utilities") pod "204861c1-aa2e-464d-bae5-e21a51f7ee1c" (UID: "204861c1-aa2e-464d-bae5-e21a51f7ee1c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.642992 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/204861c1-aa2e-464d-bae5-e21a51f7ee1c-kube-api-access-6nzxp" (OuterVolumeSpecName: "kube-api-access-6nzxp") pod "204861c1-aa2e-464d-bae5-e21a51f7ee1c" (UID: "204861c1-aa2e-464d-bae5-e21a51f7ee1c"). InnerVolumeSpecName "kube-api-access-6nzxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.691172 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/204861c1-aa2e-464d-bae5-e21a51f7ee1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "204861c1-aa2e-464d-bae5-e21a51f7ee1c" (UID: "204861c1-aa2e-464d-bae5-e21a51f7ee1c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.731015 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/204861c1-aa2e-464d-bae5-e21a51f7ee1c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.731058 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nzxp\" (UniqueName: \"kubernetes.io/projected/204861c1-aa2e-464d-bae5-e21a51f7ee1c-kube-api-access-6nzxp\") on node \"crc\" DevicePath \"\"" Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.731074 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/204861c1-aa2e-464d-bae5-e21a51f7ee1c-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.946745 4805 generic.go:334] "Generic (PLEG): container finished" podID="204861c1-aa2e-464d-bae5-e21a51f7ee1c" containerID="7e2aa367d197673eec7d5d3f3fdeb637d0f7a86ce2cffdb1e8fdc21891fd3133" exitCode=0 Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.946801 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwl8g" event={"ID":"204861c1-aa2e-464d-bae5-e21a51f7ee1c","Type":"ContainerDied","Data":"7e2aa367d197673eec7d5d3f3fdeb637d0f7a86ce2cffdb1e8fdc21891fd3133"} Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.946840 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwl8g" event={"ID":"204861c1-aa2e-464d-bae5-e21a51f7ee1c","Type":"ContainerDied","Data":"8a4c584e6a8a63ccda3460ad012f8b7ac3adb671f557e638a003332e4cd639be"} Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.946861 4805 scope.go:117] "RemoveContainer" containerID="7e2aa367d197673eec7d5d3f3fdeb637d0f7a86ce2cffdb1e8fdc21891fd3133" Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.946856 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jwl8g" Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.973420 4805 scope.go:117] "RemoveContainer" containerID="f4797853db31df7c80ac68f44fc6fc170586b8d6cd6b1074ef87a1585ab4e9c2" Dec 16 13:30:17 crc kubenswrapper[4805]: I1216 13:30:17.987261 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jwl8g"] Dec 16 13:30:18 crc kubenswrapper[4805]: I1216 13:30:18.008390 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jwl8g"] Dec 16 13:30:18 crc kubenswrapper[4805]: I1216 13:30:18.013404 4805 scope.go:117] "RemoveContainer" containerID="b82b55e713483395d2eb8ab75ec9a6a229e7749cab6303a365faebe8c3d5d38a" Dec 16 13:30:18 crc kubenswrapper[4805]: I1216 13:30:18.049181 4805 scope.go:117] "RemoveContainer" containerID="7e2aa367d197673eec7d5d3f3fdeb637d0f7a86ce2cffdb1e8fdc21891fd3133" Dec 16 13:30:18 crc kubenswrapper[4805]: E1216 13:30:18.063978 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e2aa367d197673eec7d5d3f3fdeb637d0f7a86ce2cffdb1e8fdc21891fd3133\": container with ID starting with 7e2aa367d197673eec7d5d3f3fdeb637d0f7a86ce2cffdb1e8fdc21891fd3133 not found: ID does not exist" containerID="7e2aa367d197673eec7d5d3f3fdeb637d0f7a86ce2cffdb1e8fdc21891fd3133" Dec 16 13:30:18 crc kubenswrapper[4805]: I1216 13:30:18.064028 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e2aa367d197673eec7d5d3f3fdeb637d0f7a86ce2cffdb1e8fdc21891fd3133"} err="failed to get container status \"7e2aa367d197673eec7d5d3f3fdeb637d0f7a86ce2cffdb1e8fdc21891fd3133\": rpc error: code = NotFound desc = could not find container \"7e2aa367d197673eec7d5d3f3fdeb637d0f7a86ce2cffdb1e8fdc21891fd3133\": container with ID starting with 7e2aa367d197673eec7d5d3f3fdeb637d0f7a86ce2cffdb1e8fdc21891fd3133 not found: ID does not exist" Dec 16 13:30:18 crc kubenswrapper[4805]: I1216 13:30:18.064056 4805 scope.go:117] "RemoveContainer" containerID="f4797853db31df7c80ac68f44fc6fc170586b8d6cd6b1074ef87a1585ab4e9c2" Dec 16 13:30:18 crc kubenswrapper[4805]: E1216 13:30:18.064572 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4797853db31df7c80ac68f44fc6fc170586b8d6cd6b1074ef87a1585ab4e9c2\": container with ID starting with f4797853db31df7c80ac68f44fc6fc170586b8d6cd6b1074ef87a1585ab4e9c2 not found: ID does not exist" containerID="f4797853db31df7c80ac68f44fc6fc170586b8d6cd6b1074ef87a1585ab4e9c2" Dec 16 13:30:18 crc kubenswrapper[4805]: I1216 13:30:18.064597 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4797853db31df7c80ac68f44fc6fc170586b8d6cd6b1074ef87a1585ab4e9c2"} err="failed to get container status \"f4797853db31df7c80ac68f44fc6fc170586b8d6cd6b1074ef87a1585ab4e9c2\": rpc error: code = NotFound desc = could not find container \"f4797853db31df7c80ac68f44fc6fc170586b8d6cd6b1074ef87a1585ab4e9c2\": container with ID starting with f4797853db31df7c80ac68f44fc6fc170586b8d6cd6b1074ef87a1585ab4e9c2 not found: ID does not exist" Dec 16 13:30:18 crc kubenswrapper[4805]: I1216 13:30:18.064613 4805 scope.go:117] "RemoveContainer" containerID="b82b55e713483395d2eb8ab75ec9a6a229e7749cab6303a365faebe8c3d5d38a" Dec 16 13:30:18 crc kubenswrapper[4805]: E1216 13:30:18.066481 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b82b55e713483395d2eb8ab75ec9a6a229e7749cab6303a365faebe8c3d5d38a\": container with ID starting with b82b55e713483395d2eb8ab75ec9a6a229e7749cab6303a365faebe8c3d5d38a not found: ID does not exist" containerID="b82b55e713483395d2eb8ab75ec9a6a229e7749cab6303a365faebe8c3d5d38a" Dec 16 13:30:18 crc kubenswrapper[4805]: I1216 13:30:18.066544 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82b55e713483395d2eb8ab75ec9a6a229e7749cab6303a365faebe8c3d5d38a"} err="failed to get container status \"b82b55e713483395d2eb8ab75ec9a6a229e7749cab6303a365faebe8c3d5d38a\": rpc error: code = NotFound desc = could not find container \"b82b55e713483395d2eb8ab75ec9a6a229e7749cab6303a365faebe8c3d5d38a\": container with ID starting with b82b55e713483395d2eb8ab75ec9a6a229e7749cab6303a365faebe8c3d5d38a not found: ID does not exist" Dec 16 13:30:18 crc kubenswrapper[4805]: I1216 13:30:18.535472 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="204861c1-aa2e-464d-bae5-e21a51f7ee1c" path="/var/lib/kubelet/pods/204861c1-aa2e-464d-bae5-e21a51f7ee1c/volumes" Dec 16 13:30:21 crc kubenswrapper[4805]: I1216 13:30:21.679494 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx_fd931a62-7c95-4999-8a4f-5c57209ea44f/util/0.log" Dec 16 13:30:21 crc kubenswrapper[4805]: I1216 13:30:21.950193 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx_fd931a62-7c95-4999-8a4f-5c57209ea44f/pull/0.log" Dec 16 13:30:22 crc kubenswrapper[4805]: I1216 13:30:22.015126 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx_fd931a62-7c95-4999-8a4f-5c57209ea44f/util/0.log" Dec 16 13:30:22 crc kubenswrapper[4805]: I1216 13:30:22.046856 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx_fd931a62-7c95-4999-8a4f-5c57209ea44f/pull/0.log" Dec 16 13:30:22 crc kubenswrapper[4805]: I1216 13:30:22.214336 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx_fd931a62-7c95-4999-8a4f-5c57209ea44f/pull/0.log" Dec 16 13:30:22 crc kubenswrapper[4805]: I1216 13:30:22.227915 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx_fd931a62-7c95-4999-8a4f-5c57209ea44f/util/0.log" Dec 16 13:30:22 crc kubenswrapper[4805]: I1216 13:30:22.346292 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4q5nqx_fd931a62-7c95-4999-8a4f-5c57209ea44f/extract/0.log" Dec 16 13:30:22 crc kubenswrapper[4805]: I1216 13:30:22.440553 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct_67333641-ba72-4136-9667-27fe128bbd8f/util/0.log" Dec 16 13:30:22 crc kubenswrapper[4805]: I1216 13:30:22.666887 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct_67333641-ba72-4136-9667-27fe128bbd8f/pull/0.log" 
Dec 16 13:30:22 crc kubenswrapper[4805]: I1216 13:30:22.785233 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct_67333641-ba72-4136-9667-27fe128bbd8f/util/0.log" Dec 16 13:30:22 crc kubenswrapper[4805]: I1216 13:30:22.821026 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct_67333641-ba72-4136-9667-27fe128bbd8f/pull/0.log" Dec 16 13:30:22 crc kubenswrapper[4805]: I1216 13:30:22.995674 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct_67333641-ba72-4136-9667-27fe128bbd8f/util/0.log" Dec 16 13:30:22 crc kubenswrapper[4805]: I1216 13:30:22.996423 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct_67333641-ba72-4136-9667-27fe128bbd8f/pull/0.log" Dec 16 13:30:23 crc kubenswrapper[4805]: I1216 13:30:23.053168 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nkbct_67333641-ba72-4136-9667-27fe128bbd8f/extract/0.log" Dec 16 13:30:23 crc kubenswrapper[4805]: I1216 13:30:23.266819 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cqgcs_b2e2559b-85ef-43d4-8c14-4aa510a5132c/extract-utilities/0.log" Dec 16 13:30:23 crc kubenswrapper[4805]: I1216 13:30:23.520179 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cqgcs_b2e2559b-85ef-43d4-8c14-4aa510a5132c/extract-content/0.log" Dec 16 13:30:23 crc kubenswrapper[4805]: I1216 13:30:23.534684 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cqgcs_b2e2559b-85ef-43d4-8c14-4aa510a5132c/extract-utilities/0.log" Dec 16 13:30:23 crc kubenswrapper[4805]: I1216 13:30:23.546840 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cqgcs_b2e2559b-85ef-43d4-8c14-4aa510a5132c/extract-content/0.log" Dec 16 13:30:23 crc kubenswrapper[4805]: I1216 13:30:23.783565 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cqgcs_b2e2559b-85ef-43d4-8c14-4aa510a5132c/extract-content/0.log" Dec 16 13:30:23 crc kubenswrapper[4805]: I1216 13:30:23.876182 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cqgcs_b2e2559b-85ef-43d4-8c14-4aa510a5132c/extract-utilities/0.log" Dec 16 13:30:24 crc kubenswrapper[4805]: I1216 13:30:24.173509 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9w4vc_c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5/extract-utilities/0.log" Dec 16 13:30:24 crc kubenswrapper[4805]: I1216 13:30:24.339726 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cqgcs_b2e2559b-85ef-43d4-8c14-4aa510a5132c/registry-server/0.log" Dec 16 13:30:24 crc kubenswrapper[4805]: I1216 13:30:24.500084 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9w4vc_c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5/extract-utilities/0.log" Dec 16 13:30:24 crc kubenswrapper[4805]: I1216 13:30:24.540993 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-9w4vc_c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5/extract-content/0.log" Dec 16 13:30:24 crc kubenswrapper[4805]: I1216 13:30:24.596471 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9w4vc_c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5/extract-content/0.log" Dec 16 13:30:24 crc kubenswrapper[4805]: I1216 13:30:24.755177 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9w4vc_c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5/extract-utilities/0.log" Dec 16 13:30:24 crc kubenswrapper[4805]: I1216 13:30:24.800831 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9w4vc_c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5/extract-content/0.log" Dec 16 13:30:24 crc kubenswrapper[4805]: I1216 13:30:24.879124 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:24 crc kubenswrapper[4805]: I1216 13:30:24.946397 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjvmf"] Dec 16 13:30:25 crc kubenswrapper[4805]: I1216 13:30:25.014825 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vjvmf" podUID="726d2d40-a173-4cfa-8a58-160f3ef1bf5c" containerName="registry-server" containerID="cri-o://0eda3f4406fb0f37ad4425b7ede6162262448a5e23c105c3cb3b8bb9acb17e05" gracePeriod=2 Dec 16 13:30:25 crc kubenswrapper[4805]: I1216 13:30:25.056283 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-g2w5p_ec1e6ca1-a29e-4572-8326-f4119b22b30a/marketplace-operator/0.log" Dec 16 13:30:25 crc kubenswrapper[4805]: I1216 13:30:25.316876 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7qdvb_5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1/extract-utilities/0.log" Dec 16 13:30:25 crc kubenswrapper[4805]: I1216 13:30:25.649643 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9w4vc_c98306a0-6a4d-4ca5-bdc8-85aa073e9fc5/registry-server/0.log" Dec 16 13:30:25 crc kubenswrapper[4805]: I1216 13:30:25.657860 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7qdvb_5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1/extract-content/0.log" Dec 16 13:30:25 crc kubenswrapper[4805]: I1216 13:30:25.727886 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7qdvb_5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1/extract-utilities/0.log" Dec 16 13:30:25 crc kubenswrapper[4805]: I1216 13:30:25.748679 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7qdvb_5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1/extract-content/0.log" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.051818 4805 generic.go:334] "Generic (PLEG): container finished" podID="726d2d40-a173-4cfa-8a58-160f3ef1bf5c" containerID="0eda3f4406fb0f37ad4425b7ede6162262448a5e23c105c3cb3b8bb9acb17e05" exitCode=0 Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.051865 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjvmf" 
event={"ID":"726d2d40-a173-4cfa-8a58-160f3ef1bf5c","Type":"ContainerDied","Data":"0eda3f4406fb0f37ad4425b7ede6162262448a5e23c105c3cb3b8bb9acb17e05"} Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.051895 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjvmf" event={"ID":"726d2d40-a173-4cfa-8a58-160f3ef1bf5c","Type":"ContainerDied","Data":"710b5dce56fcd1bc84a3bbbd0f567068eed969b6d185d2e7da3a984e3b30687b"} Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.051909 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="710b5dce56fcd1bc84a3bbbd0f567068eed969b6d185d2e7da3a984e3b30687b" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.126978 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.207458 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49c2j\" (UniqueName: \"kubernetes.io/projected/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-kube-api-access-49c2j\") pod \"726d2d40-a173-4cfa-8a58-160f3ef1bf5c\" (UID: \"726d2d40-a173-4cfa-8a58-160f3ef1bf5c\") " Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.207614 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-catalog-content\") pod \"726d2d40-a173-4cfa-8a58-160f3ef1bf5c\" (UID: \"726d2d40-a173-4cfa-8a58-160f3ef1bf5c\") " Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.208015 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-utilities\") pod \"726d2d40-a173-4cfa-8a58-160f3ef1bf5c\" (UID: \"726d2d40-a173-4cfa-8a58-160f3ef1bf5c\") " Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.211519 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-utilities" (OuterVolumeSpecName: "utilities") pod "726d2d40-a173-4cfa-8a58-160f3ef1bf5c" (UID: "726d2d40-a173-4cfa-8a58-160f3ef1bf5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.225927 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-kube-api-access-49c2j" (OuterVolumeSpecName: "kube-api-access-49c2j") pod "726d2d40-a173-4cfa-8a58-160f3ef1bf5c" (UID: "726d2d40-a173-4cfa-8a58-160f3ef1bf5c"). InnerVolumeSpecName "kube-api-access-49c2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.247093 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7qdvb_5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1/registry-server/0.log" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.248804 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "726d2d40-a173-4cfa-8a58-160f3ef1bf5c" (UID: "726d2d40-a173-4cfa-8a58-160f3ef1bf5c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.266819 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7qdvb_5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1/extract-utilities/0.log" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.290621 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7qdvb_5c69a7ab-dcf6-4c11-b1e4-faf7a390ebb1/extract-content/0.log" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.312297 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.312351 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49c2j\" (UniqueName: \"kubernetes.io/projected/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-kube-api-access-49c2j\") on node \"crc\" DevicePath \"\"" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.312365 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/726d2d40-a173-4cfa-8a58-160f3ef1bf5c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.451742 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vjvmf_726d2d40-a173-4cfa-8a58-160f3ef1bf5c/extract-utilities/0.log" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.635774 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vjvmf_726d2d40-a173-4cfa-8a58-160f3ef1bf5c/extract-utilities/0.log" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.672877 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vjvmf_726d2d40-a173-4cfa-8a58-160f3ef1bf5c/extract-content/0.log" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.695457 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vjvmf_726d2d40-a173-4cfa-8a58-160f3ef1bf5c/extract-content/0.log" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.961960 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vjvmf_726d2d40-a173-4cfa-8a58-160f3ef1bf5c/extract-content/0.log" Dec 16 13:30:26 crc kubenswrapper[4805]: I1216 13:30:26.966080 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vjvmf_726d2d40-a173-4cfa-8a58-160f3ef1bf5c/extract-utilities/0.log" Dec 16 13:30:27 crc kubenswrapper[4805]: I1216 13:30:27.012721 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vjvmf_726d2d40-a173-4cfa-8a58-160f3ef1bf5c/registry-server/0.log" Dec 16 13:30:27 crc kubenswrapper[4805]: I1216 13:30:27.059802 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjvmf" Dec 16 13:30:27 crc kubenswrapper[4805]: I1216 13:30:27.071757 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 13:30:27 crc kubenswrapper[4805]: I1216 13:30:27.071825 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 13:30:27 crc kubenswrapper[4805]: I1216 13:30:27.088881 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjvmf"] Dec 16 13:30:27 crc kubenswrapper[4805]: I1216 13:30:27.101149 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjvmf"] Dec 16 13:30:27 crc kubenswrapper[4805]: I1216 13:30:27.220002 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ptrmm_2eb513c8-933e-42ca-8720-a0fed194ea8d/extract-utilities/0.log" Dec 16 13:30:27 crc kubenswrapper[4805]: I1216 13:30:27.387294 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ptrmm_2eb513c8-933e-42ca-8720-a0fed194ea8d/extract-utilities/0.log" Dec 16 13:30:27 crc kubenswrapper[4805]: I1216 13:30:27.421889 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ptrmm_2eb513c8-933e-42ca-8720-a0fed194ea8d/extract-content/0.log" Dec 16 13:30:27 crc kubenswrapper[4805]: I1216 13:30:27.442394 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ptrmm_2eb513c8-933e-42ca-8720-a0fed194ea8d/extract-content/0.log" Dec 16 13:30:27 crc kubenswrapper[4805]: I1216 13:30:27.649169 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ptrmm_2eb513c8-933e-42ca-8720-a0fed194ea8d/extract-content/0.log" Dec 16 13:30:27 crc kubenswrapper[4805]: I1216 13:30:27.673814 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ptrmm_2eb513c8-933e-42ca-8720-a0fed194ea8d/extract-utilities/0.log" Dec 16 13:30:28 crc kubenswrapper[4805]: I1216 13:30:28.354403 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ptrmm_2eb513c8-933e-42ca-8720-a0fed194ea8d/registry-server/0.log" Dec 16 13:30:28 crc kubenswrapper[4805]: I1216 13:30:28.534937 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="726d2d40-a173-4cfa-8a58-160f3ef1bf5c" path="/var/lib/kubelet/pods/726d2d40-a173-4cfa-8a58-160f3ef1bf5c/volumes" Dec 16 13:30:32 crc kubenswrapper[4805]: E1216 13:30:32.984035 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726d2d40_a173_4cfa_8a58_160f3ef1bf5c.slice/crio-710b5dce56fcd1bc84a3bbbd0f567068eed969b6d185d2e7da3a984e3b30687b\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726d2d40_a173_4cfa_8a58_160f3ef1bf5c.slice\": RecentStats: unable to find data in memory cache]" Dec 16 13:30:43 crc kubenswrapper[4805]: E1216 13:30:43.265768 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726d2d40_a173_4cfa_8a58_160f3ef1bf5c.slice/crio-710b5dce56fcd1bc84a3bbbd0f567068eed969b6d185d2e7da3a984e3b30687b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726d2d40_a173_4cfa_8a58_160f3ef1bf5c.slice\": RecentStats: unable to find data in memory cache]" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.717663 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z2wt6"] Dec 16 13:30:50 crc kubenswrapper[4805]: E1216 13:30:50.719799 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726d2d40-a173-4cfa-8a58-160f3ef1bf5c" containerName="extract-utilities" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.719915 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="726d2d40-a173-4cfa-8a58-160f3ef1bf5c" containerName="extract-utilities" Dec 16 13:30:50 crc kubenswrapper[4805]: E1216 13:30:50.720027 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204861c1-aa2e-464d-bae5-e21a51f7ee1c" containerName="registry-server" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.720107 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="204861c1-aa2e-464d-bae5-e21a51f7ee1c" containerName="registry-server" Dec 16 13:30:50 crc kubenswrapper[4805]: E1216 13:30:50.720216 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726d2d40-a173-4cfa-8a58-160f3ef1bf5c" containerName="extract-content" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.720289 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="726d2d40-a173-4cfa-8a58-160f3ef1bf5c" containerName="extract-content" Dec 16 13:30:50 crc kubenswrapper[4805]: E1216 13:30:50.720649 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204861c1-aa2e-464d-bae5-e21a51f7ee1c" containerName="extract-utilities" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.720741 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="204861c1-aa2e-464d-bae5-e21a51f7ee1c" containerName="extract-utilities" Dec 16 13:30:50 crc kubenswrapper[4805]: E1216 13:30:50.720900 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726d2d40-a173-4cfa-8a58-160f3ef1bf5c" containerName="registry-server" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.720988 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="726d2d40-a173-4cfa-8a58-160f3ef1bf5c" containerName="registry-server" Dec 16 13:30:50 crc kubenswrapper[4805]: E1216 13:30:50.721087 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204861c1-aa2e-464d-bae5-e21a51f7ee1c" containerName="extract-content" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.721182 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="204861c1-aa2e-464d-bae5-e21a51f7ee1c" containerName="extract-content" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.721637 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="726d2d40-a173-4cfa-8a58-160f3ef1bf5c" containerName="registry-server" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 
13:30:50.721748 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="204861c1-aa2e-464d-bae5-e21a51f7ee1c" containerName="registry-server" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.725831 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.746391 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z2wt6"] Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.747623 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-catalog-content\") pod \"certified-operators-z2wt6\" (UID: \"7a794705-2f5d-4a29-a98f-6c4d1dab4b22\") " pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.747846 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-utilities\") pod \"certified-operators-z2wt6\" (UID: \"7a794705-2f5d-4a29-a98f-6c4d1dab4b22\") " pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.748206 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsd4g\" (UniqueName: \"kubernetes.io/projected/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-kube-api-access-zsd4g\") pod \"certified-operators-z2wt6\" (UID: \"7a794705-2f5d-4a29-a98f-6c4d1dab4b22\") " pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.850997 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-catalog-content\") pod \"certified-operators-z2wt6\" (UID: \"7a794705-2f5d-4a29-a98f-6c4d1dab4b22\") " pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.851072 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-utilities\") pod \"certified-operators-z2wt6\" (UID: \"7a794705-2f5d-4a29-a98f-6c4d1dab4b22\") " pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.851362 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsd4g\" (UniqueName: \"kubernetes.io/projected/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-kube-api-access-zsd4g\") pod \"certified-operators-z2wt6\" (UID: \"7a794705-2f5d-4a29-a98f-6c4d1dab4b22\") " pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.852550 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-catalog-content\") pod \"certified-operators-z2wt6\" (UID: \"7a794705-2f5d-4a29-a98f-6c4d1dab4b22\") " pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.852697 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-utilities\") pod \"certified-operators-z2wt6\" (UID: \"7a794705-2f5d-4a29-a98f-6c4d1dab4b22\") " pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:30:50 crc kubenswrapper[4805]: I1216 13:30:50.879519 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsd4g\" (UniqueName: \"kubernetes.io/projected/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-kube-api-access-zsd4g\") pod \"certified-operators-z2wt6\" (UID: \"7a794705-2f5d-4a29-a98f-6c4d1dab4b22\") " pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:30:51 crc kubenswrapper[4805]: I1216 13:30:51.061163 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:30:51 crc kubenswrapper[4805]: I1216 13:30:51.846368 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z2wt6"] Dec 16 13:30:51 crc kubenswrapper[4805]: W1216 13:30:51.885772 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a794705_2f5d_4a29_a98f_6c4d1dab4b22.slice/crio-72621205463dc58482296255a33c8d2a6cfb2d10e1ed8ae760093a20a8097dfd WatchSource:0}: Error finding container 72621205463dc58482296255a33c8d2a6cfb2d10e1ed8ae760093a20a8097dfd: Status 404 returned error can't find the container with id 72621205463dc58482296255a33c8d2a6cfb2d10e1ed8ae760093a20a8097dfd Dec 16 13:30:52 crc kubenswrapper[4805]: I1216 13:30:52.429041 4805 generic.go:334] "Generic (PLEG): container finished" podID="7a794705-2f5d-4a29-a98f-6c4d1dab4b22" containerID="6735a799406d2b00c5d5df77f499cb617fd0383f1e41bb065ae3ad9df2d798b5" exitCode=0 Dec 16 13:30:52 crc kubenswrapper[4805]: I1216 13:30:52.429481 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2wt6" event={"ID":"7a794705-2f5d-4a29-a98f-6c4d1dab4b22","Type":"ContainerDied","Data":"6735a799406d2b00c5d5df77f499cb617fd0383f1e41bb065ae3ad9df2d798b5"} Dec 16 13:30:52 crc kubenswrapper[4805]: I1216 13:30:52.429509 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2wt6" event={"ID":"7a794705-2f5d-4a29-a98f-6c4d1dab4b22","Type":"ContainerStarted","Data":"72621205463dc58482296255a33c8d2a6cfb2d10e1ed8ae760093a20a8097dfd"} Dec 16 13:30:53 crc kubenswrapper[4805]: E1216 13:30:53.684936 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726d2d40_a173_4cfa_8a58_160f3ef1bf5c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726d2d40_a173_4cfa_8a58_160f3ef1bf5c.slice/crio-710b5dce56fcd1bc84a3bbbd0f567068eed969b6d185d2e7da3a984e3b30687b\": RecentStats: unable to find data in memory cache]" Dec 16 13:30:54 crc kubenswrapper[4805]: I1216 13:30:54.475874 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2wt6" event={"ID":"7a794705-2f5d-4a29-a98f-6c4d1dab4b22","Type":"ContainerStarted","Data":"4f151647374e9d5f278a17b7bd4547dd8dcb05771245f91a7248c07daf503ed9"} Dec 16 13:30:55 crc kubenswrapper[4805]: I1216 13:30:55.486224 4805 generic.go:334] "Generic (PLEG): container finished" podID="7a794705-2f5d-4a29-a98f-6c4d1dab4b22" 
containerID="4f151647374e9d5f278a17b7bd4547dd8dcb05771245f91a7248c07daf503ed9" exitCode=0 Dec 16 13:30:55 crc kubenswrapper[4805]: I1216 13:30:55.486290 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2wt6" event={"ID":"7a794705-2f5d-4a29-a98f-6c4d1dab4b22","Type":"ContainerDied","Data":"4f151647374e9d5f278a17b7bd4547dd8dcb05771245f91a7248c07daf503ed9"} Dec 16 13:30:56 crc kubenswrapper[4805]: I1216 13:30:56.497762 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2wt6" event={"ID":"7a794705-2f5d-4a29-a98f-6c4d1dab4b22","Type":"ContainerStarted","Data":"b8687819c81a32c98d65f29dfe5056008fe94956be6d95f829be8d07b1b40bc9"} Dec 16 13:30:56 crc kubenswrapper[4805]: I1216 13:30:56.525754 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z2wt6" podStartSLOduration=2.8999882660000003 podStartE2EDuration="6.525734004s" podCreationTimestamp="2025-12-16 13:30:50 +0000 UTC" firstStartedPulling="2025-12-16 13:30:52.431914546 +0000 UTC m=+5726.150172351" lastFinishedPulling="2025-12-16 13:30:56.057660284 +0000 UTC m=+5729.775918089" observedRunningTime="2025-12-16 13:30:56.517683813 +0000 UTC m=+5730.235941618" watchObservedRunningTime="2025-12-16 13:30:56.525734004 +0000 UTC m=+5730.243991829" Dec 16 13:30:57 crc kubenswrapper[4805]: I1216 13:30:57.071187 4805 patch_prober.go:28] interesting pod/machine-config-daemon-5gm98 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 16 13:30:57 crc kubenswrapper[4805]: I1216 13:30:57.071293 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 16 13:30:57 crc kubenswrapper[4805]: I1216 13:30:57.071348 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" Dec 16 13:30:57 crc kubenswrapper[4805]: I1216 13:30:57.072388 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924"} pod="openshift-machine-config-operator/machine-config-daemon-5gm98" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 16 13:30:57 crc kubenswrapper[4805]: I1216 13:30:57.072472 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerName="machine-config-daemon" containerID="cri-o://9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" gracePeriod=600 Dec 16 13:30:57 crc kubenswrapper[4805]: E1216 13:30:57.227015 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:30:57 crc kubenswrapper[4805]: I1216 13:30:57.528449 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" exitCode=0 Dec 16 13:30:57 crc kubenswrapper[4805]: I1216 13:30:57.528947 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerDied","Data":"9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924"} Dec 16 13:30:57 crc kubenswrapper[4805]: I1216 13:30:57.528990 4805 scope.go:117] "RemoveContainer" containerID="5e40366d476e3b90378575a27eb523bfc7438df121ddfab745128aced0f99f31" Dec 16 13:30:57 crc kubenswrapper[4805]: I1216 13:30:57.529543 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:30:57 crc kubenswrapper[4805]: E1216 13:30:57.529855 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:31:01 crc kubenswrapper[4805]: I1216 13:31:01.062619 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:31:01 crc kubenswrapper[4805]: I1216 13:31:01.062961 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:31:01 crc kubenswrapper[4805]: I1216 13:31:01.163013 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:31:01 crc kubenswrapper[4805]: I1216 13:31:01.672200 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:31:01 crc kubenswrapper[4805]: I1216 13:31:01.741114 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z2wt6"] Dec 16 13:31:02 crc kubenswrapper[4805]: I1216 13:31:02.007064 4805 scope.go:117] "RemoveContainer" containerID="81bd240fa2ccb6b0b674fdb18147cda63486cbe640926fa477cbfc294fc8684e" Dec 16 13:31:03 crc kubenswrapper[4805]: I1216 13:31:03.675113 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z2wt6" podUID="7a794705-2f5d-4a29-a98f-6c4d1dab4b22" containerName="registry-server" containerID="cri-o://b8687819c81a32c98d65f29dfe5056008fe94956be6d95f829be8d07b1b40bc9" gracePeriod=2 Dec 16 13:31:03 crc kubenswrapper[4805]: E1216 13:31:03.961374 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726d2d40_a173_4cfa_8a58_160f3ef1bf5c.slice/crio-710b5dce56fcd1bc84a3bbbd0f567068eed969b6d185d2e7da3a984e3b30687b\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726d2d40_a173_4cfa_8a58_160f3ef1bf5c.slice\": RecentStats: unable to find data in memory cache]" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.340673 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.466975 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsd4g\" (UniqueName: \"kubernetes.io/projected/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-kube-api-access-zsd4g\") pod \"7a794705-2f5d-4a29-a98f-6c4d1dab4b22\" (UID: \"7a794705-2f5d-4a29-a98f-6c4d1dab4b22\") " Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.467222 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-utilities\") pod \"7a794705-2f5d-4a29-a98f-6c4d1dab4b22\" (UID: \"7a794705-2f5d-4a29-a98f-6c4d1dab4b22\") " Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.467537 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-catalog-content\") pod \"7a794705-2f5d-4a29-a98f-6c4d1dab4b22\" (UID: \"7a794705-2f5d-4a29-a98f-6c4d1dab4b22\") " Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.469633 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-utilities" (OuterVolumeSpecName: "utilities") pod "7a794705-2f5d-4a29-a98f-6c4d1dab4b22" (UID: "7a794705-2f5d-4a29-a98f-6c4d1dab4b22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.476979 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-kube-api-access-zsd4g" (OuterVolumeSpecName: "kube-api-access-zsd4g") pod "7a794705-2f5d-4a29-a98f-6c4d1dab4b22" (UID: "7a794705-2f5d-4a29-a98f-6c4d1dab4b22"). InnerVolumeSpecName "kube-api-access-zsd4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.558818 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a794705-2f5d-4a29-a98f-6c4d1dab4b22" (UID: "7a794705-2f5d-4a29-a98f-6c4d1dab4b22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.569368 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.569700 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsd4g\" (UniqueName: \"kubernetes.io/projected/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-kube-api-access-zsd4g\") on node \"crc\" DevicePath \"\"" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.569797 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a794705-2f5d-4a29-a98f-6c4d1dab4b22-utilities\") on node \"crc\" DevicePath \"\"" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.688163 4805 generic.go:334] "Generic (PLEG): container finished" podID="7a794705-2f5d-4a29-a98f-6c4d1dab4b22" containerID="b8687819c81a32c98d65f29dfe5056008fe94956be6d95f829be8d07b1b40bc9" exitCode=0 Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.688208 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2wt6" event={"ID":"7a794705-2f5d-4a29-a98f-6c4d1dab4b22","Type":"ContainerDied","Data":"b8687819c81a32c98d65f29dfe5056008fe94956be6d95f829be8d07b1b40bc9"} Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.688239 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2wt6" event={"ID":"7a794705-2f5d-4a29-a98f-6c4d1dab4b22","Type":"ContainerDied","Data":"72621205463dc58482296255a33c8d2a6cfb2d10e1ed8ae760093a20a8097dfd"} Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.688257 4805 scope.go:117] "RemoveContainer" containerID="b8687819c81a32c98d65f29dfe5056008fe94956be6d95f829be8d07b1b40bc9" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.688409 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z2wt6" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.715592 4805 scope.go:117] "RemoveContainer" containerID="4f151647374e9d5f278a17b7bd4547dd8dcb05771245f91a7248c07daf503ed9" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.757131 4805 scope.go:117] "RemoveContainer" containerID="6735a799406d2b00c5d5df77f499cb617fd0383f1e41bb065ae3ad9df2d798b5" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.765557 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z2wt6"] Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.786129 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z2wt6"] Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.800665 4805 scope.go:117] "RemoveContainer" containerID="b8687819c81a32c98d65f29dfe5056008fe94956be6d95f829be8d07b1b40bc9" Dec 16 13:31:04 crc kubenswrapper[4805]: E1216 13:31:04.801252 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8687819c81a32c98d65f29dfe5056008fe94956be6d95f829be8d07b1b40bc9\": container with ID starting with b8687819c81a32c98d65f29dfe5056008fe94956be6d95f829be8d07b1b40bc9 not found: ID does not exist" containerID="b8687819c81a32c98d65f29dfe5056008fe94956be6d95f829be8d07b1b40bc9" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.801370 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8687819c81a32c98d65f29dfe5056008fe94956be6d95f829be8d07b1b40bc9"} err="failed to get container status \"b8687819c81a32c98d65f29dfe5056008fe94956be6d95f829be8d07b1b40bc9\": rpc error: code = NotFound desc = could not find container \"b8687819c81a32c98d65f29dfe5056008fe94956be6d95f829be8d07b1b40bc9\": container with ID starting with b8687819c81a32c98d65f29dfe5056008fe94956be6d95f829be8d07b1b40bc9 not found: ID does not exist" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.801505 4805 scope.go:117] "RemoveContainer" containerID="4f151647374e9d5f278a17b7bd4547dd8dcb05771245f91a7248c07daf503ed9" Dec 16 13:31:04 crc kubenswrapper[4805]: E1216 13:31:04.801795 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f151647374e9d5f278a17b7bd4547dd8dcb05771245f91a7248c07daf503ed9\": container with ID starting with 4f151647374e9d5f278a17b7bd4547dd8dcb05771245f91a7248c07daf503ed9 not found: ID does not exist" containerID="4f151647374e9d5f278a17b7bd4547dd8dcb05771245f91a7248c07daf503ed9" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.801901 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f151647374e9d5f278a17b7bd4547dd8dcb05771245f91a7248c07daf503ed9"} err="failed to get container status \"4f151647374e9d5f278a17b7bd4547dd8dcb05771245f91a7248c07daf503ed9\": rpc error: code = NotFound desc = could not find container \"4f151647374e9d5f278a17b7bd4547dd8dcb05771245f91a7248c07daf503ed9\": container with ID starting with 4f151647374e9d5f278a17b7bd4547dd8dcb05771245f91a7248c07daf503ed9 not found: ID does not exist" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.801969 4805 scope.go:117] "RemoveContainer" containerID="6735a799406d2b00c5d5df77f499cb617fd0383f1e41bb065ae3ad9df2d798b5" Dec 16 13:31:04 crc kubenswrapper[4805]: E1216 13:31:04.802261 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6735a799406d2b00c5d5df77f499cb617fd0383f1e41bb065ae3ad9df2d798b5\": container with ID starting with 6735a799406d2b00c5d5df77f499cb617fd0383f1e41bb065ae3ad9df2d798b5 not found: ID does not exist" containerID="6735a799406d2b00c5d5df77f499cb617fd0383f1e41bb065ae3ad9df2d798b5" Dec 16 13:31:04 crc kubenswrapper[4805]: I1216 13:31:04.802340 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6735a799406d2b00c5d5df77f499cb617fd0383f1e41bb065ae3ad9df2d798b5"} err="failed to get container status \"6735a799406d2b00c5d5df77f499cb617fd0383f1e41bb065ae3ad9df2d798b5\": rpc error: code = NotFound desc = could not find container \"6735a799406d2b00c5d5df77f499cb617fd0383f1e41bb065ae3ad9df2d798b5\": container with ID starting with 6735a799406d2b00c5d5df77f499cb617fd0383f1e41bb065ae3ad9df2d798b5 not found: ID does not exist" Dec 16 13:31:06 crc kubenswrapper[4805]: I1216 13:31:06.536125 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a794705-2f5d-4a29-a98f-6c4d1dab4b22" path="/var/lib/kubelet/pods/7a794705-2f5d-4a29-a98f-6c4d1dab4b22/volumes" Dec 16 13:31:11 crc kubenswrapper[4805]: I1216 13:31:11.524804 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:31:11 crc kubenswrapper[4805]: E1216 13:31:11.525636 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:31:14 crc kubenswrapper[4805]: E1216 13:31:14.312953 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726d2d40_a173_4cfa_8a58_160f3ef1bf5c.slice/crio-710b5dce56fcd1bc84a3bbbd0f567068eed969b6d185d2e7da3a984e3b30687b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726d2d40_a173_4cfa_8a58_160f3ef1bf5c.slice\": RecentStats: unable to find data in memory cache]" Dec 16 13:31:24 crc kubenswrapper[4805]: E1216 13:31:24.584530 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726d2d40_a173_4cfa_8a58_160f3ef1bf5c.slice/crio-710b5dce56fcd1bc84a3bbbd0f567068eed969b6d185d2e7da3a984e3b30687b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod726d2d40_a173_4cfa_8a58_160f3ef1bf5c.slice\": RecentStats: unable to find data in memory cache]" Dec 16 13:31:26 crc kubenswrapper[4805]: I1216 13:31:26.540950 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:31:26 crc kubenswrapper[4805]: E1216 13:31:26.541545 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:31:37 crc kubenswrapper[4805]: I1216 13:31:37.523303 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:31:37 crc kubenswrapper[4805]: E1216 13:31:37.524007 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:31:52 crc kubenswrapper[4805]: I1216 13:31:52.523010 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:31:52 crc kubenswrapper[4805]: E1216 13:31:52.523776 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:32:04 crc kubenswrapper[4805]: I1216 13:32:04.523032 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:32:04 crc kubenswrapper[4805]: E1216 13:32:04.523869 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:32:17 crc kubenswrapper[4805]: I1216 13:32:17.522415 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:32:17 crc kubenswrapper[4805]: E1216 13:32:17.523374 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:32:28 crc kubenswrapper[4805]: I1216 13:32:28.523583 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:32:28 crc kubenswrapper[4805]: E1216 13:32:28.524541 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" 
podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:32:41 crc kubenswrapper[4805]: I1216 13:32:41.525045 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:32:41 crc kubenswrapper[4805]: E1216 13:32:41.526455 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:32:56 crc kubenswrapper[4805]: I1216 13:32:56.538117 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:32:56 crc kubenswrapper[4805]: E1216 13:32:56.538940 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:33:11 crc kubenswrapper[4805]: I1216 13:33:11.522666 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:33:11 crc kubenswrapper[4805]: E1216 13:33:11.523426 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:33:13 crc kubenswrapper[4805]: I1216 13:33:13.056461 4805 generic.go:334] "Generic (PLEG): container finished" podID="ecc77c59-2496-4007-97a8-6f3dee9fc5ba" containerID="9a4a4d96c31fae872954b0451c4eb11deb8b60aa5b64228478998869bbee2407" exitCode=0 Dec 16 13:33:13 crc kubenswrapper[4805]: I1216 13:33:13.056545 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9f5ft/must-gather-ntr57" event={"ID":"ecc77c59-2496-4007-97a8-6f3dee9fc5ba","Type":"ContainerDied","Data":"9a4a4d96c31fae872954b0451c4eb11deb8b60aa5b64228478998869bbee2407"} Dec 16 13:33:13 crc kubenswrapper[4805]: I1216 13:33:13.057419 4805 scope.go:117] "RemoveContainer" containerID="9a4a4d96c31fae872954b0451c4eb11deb8b60aa5b64228478998869bbee2407" Dec 16 13:33:13 crc kubenswrapper[4805]: I1216 13:33:13.290432 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9f5ft_must-gather-ntr57_ecc77c59-2496-4007-97a8-6f3dee9fc5ba/gather/0.log" Dec 16 13:33:26 crc kubenswrapper[4805]: I1216 13:33:26.534537 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:33:26 crc kubenswrapper[4805]: E1216 13:33:26.535320 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:33:29 crc kubenswrapper[4805]: I1216 13:33:29.051421 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9f5ft/must-gather-ntr57"] Dec 16 13:33:29 crc kubenswrapper[4805]: I1216 13:33:29.052108 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9f5ft/must-gather-ntr57" podUID="ecc77c59-2496-4007-97a8-6f3dee9fc5ba" containerName="copy" containerID="cri-o://51a24617866229924af05cae5ef8ec6f397139516924f6e8361f83ba5febad14" gracePeriod=2 Dec 16 13:33:29 crc kubenswrapper[4805]: I1216 13:33:29.069723 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9f5ft/must-gather-ntr57"] Dec 16 13:33:29 crc kubenswrapper[4805]: I1216 13:33:29.266180 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9f5ft_must-gather-ntr57_ecc77c59-2496-4007-97a8-6f3dee9fc5ba/copy/0.log" Dec 16 13:33:29 crc kubenswrapper[4805]: I1216 13:33:29.266608 4805 generic.go:334] "Generic (PLEG): container finished" podID="ecc77c59-2496-4007-97a8-6f3dee9fc5ba" containerID="51a24617866229924af05cae5ef8ec6f397139516924f6e8361f83ba5febad14" exitCode=143 Dec 16 13:33:29 crc kubenswrapper[4805]: I1216 13:33:29.589934 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9f5ft_must-gather-ntr57_ecc77c59-2496-4007-97a8-6f3dee9fc5ba/copy/0.log" Dec 16 13:33:29 crc kubenswrapper[4805]: I1216 13:33:29.590749 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9f5ft/must-gather-ntr57" Dec 16 13:33:29 crc kubenswrapper[4805]: I1216 13:33:29.782848 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ecc77c59-2496-4007-97a8-6f3dee9fc5ba-must-gather-output\") pod \"ecc77c59-2496-4007-97a8-6f3dee9fc5ba\" (UID: \"ecc77c59-2496-4007-97a8-6f3dee9fc5ba\") " Dec 16 13:33:29 crc kubenswrapper[4805]: I1216 13:33:29.782981 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5rnw\" (UniqueName: \"kubernetes.io/projected/ecc77c59-2496-4007-97a8-6f3dee9fc5ba-kube-api-access-d5rnw\") pod \"ecc77c59-2496-4007-97a8-6f3dee9fc5ba\" (UID: \"ecc77c59-2496-4007-97a8-6f3dee9fc5ba\") " Dec 16 13:33:29 crc kubenswrapper[4805]: I1216 13:33:29.812466 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc77c59-2496-4007-97a8-6f3dee9fc5ba-kube-api-access-d5rnw" (OuterVolumeSpecName: "kube-api-access-d5rnw") pod "ecc77c59-2496-4007-97a8-6f3dee9fc5ba" (UID: "ecc77c59-2496-4007-97a8-6f3dee9fc5ba"). InnerVolumeSpecName "kube-api-access-d5rnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 16 13:33:29 crc kubenswrapper[4805]: I1216 13:33:29.887252 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5rnw\" (UniqueName: \"kubernetes.io/projected/ecc77c59-2496-4007-97a8-6f3dee9fc5ba-kube-api-access-d5rnw\") on node \"crc\" DevicePath \"\"" Dec 16 13:33:30 crc kubenswrapper[4805]: I1216 13:33:30.035231 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecc77c59-2496-4007-97a8-6f3dee9fc5ba-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ecc77c59-2496-4007-97a8-6f3dee9fc5ba" (UID: "ecc77c59-2496-4007-97a8-6f3dee9fc5ba"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 16 13:33:30 crc kubenswrapper[4805]: I1216 13:33:30.090395 4805 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ecc77c59-2496-4007-97a8-6f3dee9fc5ba-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 16 13:33:30 crc kubenswrapper[4805]: I1216 13:33:30.278467 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9f5ft_must-gather-ntr57_ecc77c59-2496-4007-97a8-6f3dee9fc5ba/copy/0.log" Dec 16 13:33:30 crc kubenswrapper[4805]: I1216 13:33:30.279010 4805 scope.go:117] "RemoveContainer" containerID="51a24617866229924af05cae5ef8ec6f397139516924f6e8361f83ba5febad14" Dec 16 13:33:30 crc kubenswrapper[4805]: I1216 13:33:30.279056 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9f5ft/must-gather-ntr57" Dec 16 13:33:30 crc kubenswrapper[4805]: I1216 13:33:30.311087 4805 scope.go:117] "RemoveContainer" containerID="9a4a4d96c31fae872954b0451c4eb11deb8b60aa5b64228478998869bbee2407" Dec 16 13:33:30 crc kubenswrapper[4805]: I1216 13:33:30.536038 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc77c59-2496-4007-97a8-6f3dee9fc5ba" path="/var/lib/kubelet/pods/ecc77c59-2496-4007-97a8-6f3dee9fc5ba/volumes" Dec 16 13:33:41 crc kubenswrapper[4805]: I1216 13:33:41.526573 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:33:41 crc kubenswrapper[4805]: E1216 13:33:41.527741 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:33:55 crc kubenswrapper[4805]: I1216 13:33:55.523257 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:33:55 crc kubenswrapper[4805]: E1216 13:33:55.524316 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:34:02 crc kubenswrapper[4805]: I1216 13:34:02.212828 4805 scope.go:117] "RemoveContainer" 
containerID="abeaf6b1ef574b0344d7d01611ad5d8b46eaece3c1ec8de803cb0afb9c3ae413" Dec 16 13:34:06 crc kubenswrapper[4805]: I1216 13:34:06.530933 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:34:06 crc kubenswrapper[4805]: E1216 13:34:06.531618 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:34:21 crc kubenswrapper[4805]: I1216 13:34:21.524083 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:34:21 crc kubenswrapper[4805]: E1216 13:34:21.524945 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:34:35 crc kubenswrapper[4805]: I1216 13:34:35.523019 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:34:35 crc kubenswrapper[4805]: E1216 13:34:35.523747 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:34:46 crc kubenswrapper[4805]: I1216 13:34:46.537047 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:34:46 crc kubenswrapper[4805]: E1216 13:34:46.539615 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:35:01 crc kubenswrapper[4805]: I1216 13:35:01.522584 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:35:01 crc kubenswrapper[4805]: E1216 13:35:01.523464 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:35:15 crc kubenswrapper[4805]: I1216 13:35:15.523672 4805 scope.go:117] "RemoveContainer" 
containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:35:15 crc kubenswrapper[4805]: E1216 13:35:15.525581 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:35:26 crc kubenswrapper[4805]: I1216 13:35:26.534718 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:35:26 crc kubenswrapper[4805]: E1216 13:35:26.535830 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:35:40 crc kubenswrapper[4805]: I1216 13:35:40.523279 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:35:40 crc kubenswrapper[4805]: E1216 13:35:40.524188 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:35:53 crc kubenswrapper[4805]: I1216 13:35:53.523210 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:35:53 crc kubenswrapper[4805]: E1216 13:35:53.523957 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5gm98_openshift-machine-config-operator(ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9)\"" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" podUID="ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9" Dec 16 13:36:07 crc kubenswrapper[4805]: I1216 13:36:07.523223 4805 scope.go:117] "RemoveContainer" containerID="9c3a0431767b8e363a6f65ee9920b34e664382909c03e39b4c72a391edd4a924" Dec 16 13:36:08 crc kubenswrapper[4805]: I1216 13:36:08.052624 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5gm98" event={"ID":"ac31fa32-4bce-4ac3-ba7c-e3b0da69c3b9","Type":"ContainerStarted","Data":"3f79797adae10086d467d1358ae494fd891ff4c731441dfc32b1893de6fcbc7e"} Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.793040 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gcs96"] Dec 16 13:36:09 crc kubenswrapper[4805]: E1216 13:36:09.793988 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a794705-2f5d-4a29-a98f-6c4d1dab4b22" containerName="extract-utilities" Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.794013 4805 
Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.793040 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gcs96"]
Dec 16 13:36:09 crc kubenswrapper[4805]: E1216 13:36:09.793988 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a794705-2f5d-4a29-a98f-6c4d1dab4b22" containerName="extract-utilities"
Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.794013 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a794705-2f5d-4a29-a98f-6c4d1dab4b22" containerName="extract-utilities"
Dec 16 13:36:09 crc kubenswrapper[4805]: E1216 13:36:09.794030 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a794705-2f5d-4a29-a98f-6c4d1dab4b22" containerName="extract-content"
Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.794038 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a794705-2f5d-4a29-a98f-6c4d1dab4b22" containerName="extract-content"
Dec 16 13:36:09 crc kubenswrapper[4805]: E1216 13:36:09.794056 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a794705-2f5d-4a29-a98f-6c4d1dab4b22" containerName="registry-server"
Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.794064 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a794705-2f5d-4a29-a98f-6c4d1dab4b22" containerName="registry-server"
Dec 16 13:36:09 crc kubenswrapper[4805]: E1216 13:36:09.794110 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc77c59-2496-4007-97a8-6f3dee9fc5ba" containerName="gather"
Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.794118 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc77c59-2496-4007-97a8-6f3dee9fc5ba" containerName="gather"
Dec 16 13:36:09 crc kubenswrapper[4805]: E1216 13:36:09.794160 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc77c59-2496-4007-97a8-6f3dee9fc5ba" containerName="copy"
Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.794167 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc77c59-2496-4007-97a8-6f3dee9fc5ba" containerName="copy"
Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.794456 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc77c59-2496-4007-97a8-6f3dee9fc5ba" containerName="copy"
Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.794503 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a794705-2f5d-4a29-a98f-6c4d1dab4b22" containerName="registry-server"
Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.794518 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc77c59-2496-4007-97a8-6f3dee9fc5ba" containerName="gather"
Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.796598 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gcs96"
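
Before admitting the new catalog pod, the cpu_manager/state_mem and memory_manager entries above clear checkpointed resource state left behind by containers of pods that no longer exist (the deleted must-gather pod and an earlier catalog pod). On the node, the CPU-manager checkpoint lives in /var/lib/kubelet/cpu_manager_state; a sketch that dumps it (field names follow the checkpoint's JSON format; the "entries" map is only populated under the static CPU-manager policy, so treat this as illustrative):

    // cpustate.go -- dumps the kubelet CPU-manager checkpoint behind the
    // "Deleted CPUSet assignment" entries above. Unknown fields (e.g. the
    // checksum) are ignored by the decoder.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type cpuManagerCheckpoint struct {
        PolicyName    string                       `json:"policyName"`
        DefaultCPUSet string                       `json:"defaultCpuSet"`
        Entries       map[string]map[string]string `json:"entries,omitempty"` // podUID -> container -> cpuset
    }

    func main() {
        raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var st cpuManagerCheckpoint
        if err := json.Unmarshal(raw, &st); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("policy=%s defaultCpuSet=%q\n", st.PolicyName, st.DefaultCPUSet)
        for pod, ctrs := range st.Entries {
            for name, set := range ctrs {
                fmt.Printf("pod %s container %s -> cpus %s\n", pod, name, set)
            }
        }
    }
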
Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.842138 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gcs96"]
Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.908863 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324-catalog-content\") pod \"redhat-operators-gcs96\" (UID: \"a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324\") " pod="openshift-marketplace/redhat-operators-gcs96"
Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.908996 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5k5z\" (UniqueName: \"kubernetes.io/projected/a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324-kube-api-access-n5k5z\") pod \"redhat-operators-gcs96\" (UID: \"a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324\") " pod="openshift-marketplace/redhat-operators-gcs96"
Dec 16 13:36:09 crc kubenswrapper[4805]: I1216 13:36:09.909065 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324-utilities\") pod \"redhat-operators-gcs96\" (UID: \"a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324\") " pod="openshift-marketplace/redhat-operators-gcs96"
Dec 16 13:36:10 crc kubenswrapper[4805]: I1216 13:36:10.010830 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5k5z\" (UniqueName: \"kubernetes.io/projected/a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324-kube-api-access-n5k5z\") pod \"redhat-operators-gcs96\" (UID: \"a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324\") " pod="openshift-marketplace/redhat-operators-gcs96"
Dec 16 13:36:10 crc kubenswrapper[4805]: I1216 13:36:10.011300 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324-utilities\") pod \"redhat-operators-gcs96\" (UID: \"a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324\") " pod="openshift-marketplace/redhat-operators-gcs96"
Dec 16 13:36:10 crc kubenswrapper[4805]: I1216 13:36:10.011522 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324-catalog-content\") pod \"redhat-operators-gcs96\" (UID: \"a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324\") " pod="openshift-marketplace/redhat-operators-gcs96"
Dec 16 13:36:10 crc kubenswrapper[4805]: I1216 13:36:10.011821 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324-utilities\") pod \"redhat-operators-gcs96\" (UID: \"a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324\") " pod="openshift-marketplace/redhat-operators-gcs96"
Dec 16 13:36:10 crc kubenswrapper[4805]: I1216 13:36:10.012128 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324-catalog-content\") pod \"redhat-operators-gcs96\" (UID: \"a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324\") " pod="openshift-marketplace/redhat-operators-gcs96"
Dec 16 13:36:10 crc kubenswrapper[4805]: I1216 13:36:10.045593 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5k5z\" (UniqueName: \"kubernetes.io/projected/a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324-kube-api-access-n5k5z\") pod \"redhat-operators-gcs96\" (UID: \"a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324\") " pod="openshift-marketplace/redhat-operators-gcs96"
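
The reconciler_common lines above are one pass of the kubelet's volume reconciler: each volume that is in the pod's desired state but not yet in the actual state gets VerifyControllerAttachedVolume and then MountVolume, and the opposite direction produces the UnmountVolume entries seen further down for the old pod. A generic sketch of that desired-vs-actual loop (types and flow invented for illustration; not kubelet code):

    // reconcile.go -- a toy desired-state / actual-state volume loop in the
    // shape of the entries above. Mount failures, attach checks, and retries
    // are omitted.
    package main

    import "fmt"

    type volume struct{ name, plugin string }

    func reconcile(desired, actual map[string]volume) {
        for name, v := range desired {
            if _, ok := actual[name]; !ok {
                fmt.Printf("MountVolume started for %q (%s)\n", name, v.plugin)
                actual[name] = v // stand-in for MountVolume.SetUp succeeding
            }
        }
        for name := range actual {
            if _, ok := desired[name]; !ok {
                fmt.Printf("UnmountVolume started for %q\n", name)
                delete(actual, name)
            }
        }
    }

    func main() {
        desired := map[string]volume{ // the three volumes mounted above
            "catalog-content":       {"catalog-content", "kubernetes.io/empty-dir"},
            "utilities":             {"utilities", "kubernetes.io/empty-dir"},
            "kube-api-access-n5k5z": {"kube-api-access-n5k5z", "kubernetes.io/projected"},
        }
        actual := map[string]volume{}
        reconcile(desired, actual)
    }
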
\"kube-api-access-n5k5z\" (UniqueName: \"kubernetes.io/projected/a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324-kube-api-access-n5k5z\") pod \"redhat-operators-gcs96\" (UID: \"a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324\") " pod="openshift-marketplace/redhat-operators-gcs96" Dec 16 13:36:10 crc kubenswrapper[4805]: I1216 13:36:10.135008 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gcs96" Dec 16 13:36:10 crc kubenswrapper[4805]: I1216 13:36:10.662647 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gcs96"] Dec 16 13:36:11 crc kubenswrapper[4805]: I1216 13:36:11.087436 4805 generic.go:334] "Generic (PLEG): container finished" podID="a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324" containerID="0625cda0c81ed2e949e3fe24556426572e86954b541b9da7de0f58c0416dee87" exitCode=0 Dec 16 13:36:11 crc kubenswrapper[4805]: I1216 13:36:11.087845 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcs96" event={"ID":"a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324","Type":"ContainerDied","Data":"0625cda0c81ed2e949e3fe24556426572e86954b541b9da7de0f58c0416dee87"} Dec 16 13:36:11 crc kubenswrapper[4805]: I1216 13:36:11.087881 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcs96" event={"ID":"a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324","Type":"ContainerStarted","Data":"0e81badfb7ffadf0539b2a753202a3ee79c15cb4c227bd15484c4a01f8a63add"} Dec 16 13:36:11 crc kubenswrapper[4805]: I1216 13:36:11.091083 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 16 13:36:22 crc kubenswrapper[4805]: I1216 13:36:22.256862 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcs96" event={"ID":"a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324","Type":"ContainerStarted","Data":"80c32489144c307f6ec421b7a449128fee307aa68b9eef6e946aaffb6e9b6810"} Dec 16 13:36:24 crc kubenswrapper[4805]: I1216 13:36:24.277124 4805 generic.go:334] "Generic (PLEG): container finished" podID="a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324" containerID="80c32489144c307f6ec421b7a449128fee307aa68b9eef6e946aaffb6e9b6810" exitCode=0 Dec 16 13:36:24 crc kubenswrapper[4805]: I1216 13:36:24.277193 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcs96" event={"ID":"a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324","Type":"ContainerDied","Data":"80c32489144c307f6ec421b7a449128fee307aa68b9eef6e946aaffb6e9b6810"} Dec 16 13:36:26 crc kubenswrapper[4805]: I1216 13:36:26.307951 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcs96" event={"ID":"a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324","Type":"ContainerStarted","Data":"9e99abae523f656471891d015c8754e6faaa1c656e1e4f5205d4026edc5be7ca"} Dec 16 13:36:26 crc kubenswrapper[4805]: I1216 13:36:26.333507 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gcs96" podStartSLOduration=2.9428412760000002 podStartE2EDuration="17.333470698s" podCreationTimestamp="2025-12-16 13:36:09 +0000 UTC" firstStartedPulling="2025-12-16 13:36:11.090841784 +0000 UTC m=+6044.809099589" lastFinishedPulling="2025-12-16 13:36:25.481471206 +0000 UTC m=+6059.199729011" observedRunningTime="2025-12-16 13:36:26.33076243 +0000 UTC m=+6060.049020235" watchObservedRunningTime="2025-12-16 13:36:26.333470698 +0000 UTC m=+6060.051728513" Dec 16 13:36:30 crc 
Dec 16 13:36:30 crc kubenswrapper[4805]: I1216 13:36:30.135425 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gcs96"
Dec 16 13:36:30 crc kubenswrapper[4805]: I1216 13:36:30.137417 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gcs96"
Dec 16 13:36:31 crc kubenswrapper[4805]: I1216 13:36:31.209673 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gcs96" podUID="a6d30aeb-1a24-4b6f-89ec-d2af4cbb4324" containerName="registry-server" probeResult="failure" output=<
Dec 16 13:36:31 crc kubenswrapper[4805]: 	timeout: failed to connect service ":50051" within 1s
Dec 16 13:36:31 crc kubenswrapper[4805]: >
Dec 16 13:36:40 crc kubenswrapper[4805]: I1216 13:36:40.203621 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gcs96"
Dec 16 13:36:40 crc kubenswrapper[4805]: I1216 13:36:40.261401 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gcs96"
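
The failed startup probe above could not reach the registry-server endpoint on port 50051 within its 1-second budget; nine seconds later the same probe reports "started" and readiness follows. An approximation of that check using a plain TCP dial (the real probe's output suggests a gRPC health check, so this is illustrative only):

    // probe.go -- approximates the startup probe above: dial the
    // registry-server port with a 1s budget, matching the "within 1s"
    // in the probe output. A real check would also speak the gRPC
    // health protocol after connecting.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:50051", 1*time.Second)
        if err != nil {
            fmt.Println("probe failed:", err) // what the kubelet records as probeResult="failure"
            return
        }
        conn.Close()
        fmt.Println("probe ok")
    }
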
Dec 16 13:36:40 crc kubenswrapper[4805]: I1216 13:36:40.814643 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gcs96"]
Dec 16 13:36:40 crc kubenswrapper[4805]: I1216 13:36:40.990375 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ptrmm"]
Dec 16 13:36:40 crc kubenswrapper[4805]: I1216 13:36:40.990622 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ptrmm" podUID="2eb513c8-933e-42ca-8720-a0fed194ea8d" containerName="registry-server" containerID="cri-o://034024ebf9a60730f6054a3a2ac454aebda17d5c6fe30d91ff1446528b587c63" gracePeriod=2
Dec 16 13:36:41 crc kubenswrapper[4805]: I1216 13:36:41.492792 4805 generic.go:334] "Generic (PLEG): container finished" podID="2eb513c8-933e-42ca-8720-a0fed194ea8d" containerID="034024ebf9a60730f6054a3a2ac454aebda17d5c6fe30d91ff1446528b587c63" exitCode=0
Dec 16 13:36:41 crc kubenswrapper[4805]: I1216 13:36:41.493074 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptrmm" event={"ID":"2eb513c8-933e-42ca-8720-a0fed194ea8d","Type":"ContainerDied","Data":"034024ebf9a60730f6054a3a2ac454aebda17d5c6fe30d91ff1446528b587c63"}
Dec 16 13:36:41 crc kubenswrapper[4805]: I1216 13:36:41.493879 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptrmm" event={"ID":"2eb513c8-933e-42ca-8720-a0fed194ea8d","Type":"ContainerDied","Data":"9a5ad0bfc3e6ccc5603135ebd3354dee7761bee161c9b2a2ae24a144869277d6"}
Dec 16 13:36:41 crc kubenswrapper[4805]: I1216 13:36:41.493903 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a5ad0bfc3e6ccc5603135ebd3354dee7761bee161c9b2a2ae24a144869277d6"
Dec 16 13:36:41 crc kubenswrapper[4805]: I1216 13:36:41.517080 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ptrmm"
Dec 16 13:36:41 crc kubenswrapper[4805]: I1216 13:36:41.637903 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eb513c8-933e-42ca-8720-a0fed194ea8d-catalog-content\") pod \"2eb513c8-933e-42ca-8720-a0fed194ea8d\" (UID: \"2eb513c8-933e-42ca-8720-a0fed194ea8d\") "
Dec 16 13:36:41 crc kubenswrapper[4805]: I1216 13:36:41.638068 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp6hw\" (UniqueName: \"kubernetes.io/projected/2eb513c8-933e-42ca-8720-a0fed194ea8d-kube-api-access-kp6hw\") pod \"2eb513c8-933e-42ca-8720-a0fed194ea8d\" (UID: \"2eb513c8-933e-42ca-8720-a0fed194ea8d\") "
Dec 16 13:36:41 crc kubenswrapper[4805]: I1216 13:36:41.638160 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eb513c8-933e-42ca-8720-a0fed194ea8d-utilities\") pod \"2eb513c8-933e-42ca-8720-a0fed194ea8d\" (UID: \"2eb513c8-933e-42ca-8720-a0fed194ea8d\") "
Dec 16 13:36:41 crc kubenswrapper[4805]: I1216 13:36:41.639310 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eb513c8-933e-42ca-8720-a0fed194ea8d-utilities" (OuterVolumeSpecName: "utilities") pod "2eb513c8-933e-42ca-8720-a0fed194ea8d" (UID: "2eb513c8-933e-42ca-8720-a0fed194ea8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 16 13:36:41 crc kubenswrapper[4805]: I1216 13:36:41.648542 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eb513c8-933e-42ca-8720-a0fed194ea8d-kube-api-access-kp6hw" (OuterVolumeSpecName: "kube-api-access-kp6hw") pod "2eb513c8-933e-42ca-8720-a0fed194ea8d" (UID: "2eb513c8-933e-42ca-8720-a0fed194ea8d"). InnerVolumeSpecName "kube-api-access-kp6hw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 16 13:36:41 crc kubenswrapper[4805]: I1216 13:36:41.741218 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eb513c8-933e-42ca-8720-a0fed194ea8d-utilities\") on node \"crc\" DevicePath \"\""
Dec 16 13:36:41 crc kubenswrapper[4805]: I1216 13:36:41.741257 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp6hw\" (UniqueName: \"kubernetes.io/projected/2eb513c8-933e-42ca-8720-a0fed194ea8d-kube-api-access-kp6hw\") on node \"crc\" DevicePath \"\""
Dec 16 13:36:41 crc kubenswrapper[4805]: I1216 13:36:41.760761 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eb513c8-933e-42ca-8720-a0fed194ea8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2eb513c8-933e-42ca-8720-a0fed194ea8d" (UID: "2eb513c8-933e-42ca-8720-a0fed194ea8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 16 13:36:41 crc kubenswrapper[4805]: I1216 13:36:41.843459 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eb513c8-933e-42ca-8720-a0fed194ea8d-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 16 13:36:42 crc kubenswrapper[4805]: I1216 13:36:42.502094 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ptrmm"
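
The "Killing container with a grace period ... gracePeriod=2" entry above is the standard termination contract: the runtime sends SIGTERM, waits up to the grace period, then sends SIGKILL; here the registry-server exits cleanly (exitCode=0) well inside its 2 seconds. A sketch of the container-side view of that contract (it models the contract only, not CRI-O):

    // gracefulstop.go -- what a container should do when the runtime
    // begins a graceful kill: catch SIGTERM, clean up, and exit before
    // the grace period expires and SIGKILL (uncatchable) follows.
    package main

    import (
        "context"
        "fmt"
        "os/signal"
        "syscall"
        "time"
    )

    func main() {
        ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
        defer stop()

        <-ctx.Done() // blocks until the runtime delivers SIGTERM
        fmt.Println("SIGTERM received, cleaning up within the grace period")
        time.Sleep(500 * time.Millisecond) // stand-in for flushing state
        fmt.Println("exiting cleanly before SIGKILL")
    }
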
Dec 16 13:36:42 crc kubenswrapper[4805]: I1216 13:36:42.541725 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ptrmm"]
Dec 16 13:36:42 crc kubenswrapper[4805]: I1216 13:36:42.563960 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ptrmm"]
Dec 16 13:36:44 crc kubenswrapper[4805]: I1216 13:36:44.532999 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eb513c8-933e-42ca-8720-a0fed194ea8d" path="/var/lib/kubelet/pods/2eb513c8-933e-42ca-8720-a0fed194ea8d/volumes"
Dec 16 13:37:02 crc kubenswrapper[4805]: I1216 13:37:02.339110 4805 scope.go:117] "RemoveContainer" containerID="6b5171f16a6f10888be28199b9c1f318cefc7c903d5fe059240276e67ab173e1"
Dec 16 13:37:02 crc kubenswrapper[4805]: I1216 13:37:02.371421 4805 scope.go:117] "RemoveContainer" containerID="ec9d80fb6101bdc4055e8ef062adc517cdcc2f78f765143753e4d3e6d7658ecf"
Dec 16 13:37:02 crc kubenswrapper[4805]: I1216 13:37:02.420796 4805 scope.go:117] "RemoveContainer" containerID="cc651b8ff5f43a5eea71ca844e005b9b4076dcb61790f48105d7f19be3aee6bc"
Dec 16 13:37:02 crc kubenswrapper[4805]: I1216 13:37:02.466890 4805 scope.go:117] "RemoveContainer" containerID="3e1f48a942a230796fe2dac7f332e3767fce167e5bdf800ad30c61f908644d9a"
Dec 16 13:37:02 crc kubenswrapper[4805]: I1216 13:37:02.509047 4805 scope.go:117] "RemoveContainer" containerID="034024ebf9a60730f6054a3a2ac454aebda17d5c6fe30d91ff1446528b587c63"
Dec 16 13:37:02 crc kubenswrapper[4805]: I1216 13:37:02.558718 4805 scope.go:117] "RemoveContainer" containerID="0eda3f4406fb0f37ad4425b7ede6162262448a5e23c105c3cb3b8bb9acb17e05"
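
The burst of "RemoveContainer" entries at 13:37:02 (including 034024eb..., the registry-server just killed above) is the kubelet's periodic container garbage collection clearing dead containers. A sketch of the policy shape behind it, in the spirit of the kubelet's --minimum-container-ttl-duration and --maximum-dead-containers-per-container settings (types and values here are invented for illustration):

    // containergc.go -- a toy mark-and-sweep over dead containers: evict
    // everything beyond a per-container keep count, oldest first, provided
    // it has been dead at least minAge.
    package main

    import (
        "fmt"
        "sort"
        "time"
    )

    type deadContainer struct {
        id       string
        exitedAt time.Time
    }

    func sweep(dead []deadContainer, minAge time.Duration, keep int, now time.Time) []string {
        sort.Slice(dead, func(i, j int) bool { return dead[i].exitedAt.Before(dead[j].exitedAt) })
        var victims []string
        for i, c := range dead {
            if len(dead)-i <= keep { // the newest `keep` survive
                break
            }
            if now.Sub(c.exitedAt) >= minAge {
                victims = append(victims, c.id)
            }
        }
        return victims
    }

    func main() {
        now := time.Now()
        dead := []deadContainer{ // IDs truncated from the entries above
            {"6b5171f1", now.Add(-10 * time.Minute)},
            {"ec9d80fb", now.Add(-8 * time.Minute)},
            {"034024eb", now.Add(-30 * time.Second)},
        }
        fmt.Println(sweep(dead, time.Minute, 1, now)) // [6b5171f1 ec9d80fb]
    }
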